gpt/mdadm/debian stable/failed drive replacement howto

It’s that time again. Last time was in 2012; fast forward to 2021 and here we are again. This time it’s a little different: the drive has not failed yet, but it is showing signs of failure.

smartctl reports that it cannot read a few sectors.

Warning: One thing you learn over many years of working with computers, servers, etc., is that you CANNOT ignore hardware failures. They will bite you back if you think you can leave them for a few extra days. My policy, and yours, should be: if you get a warning that something is wrong, you act. If you work for a business, that means you ship the new drive overnight. No excuses should be allowed in this regard.

With that hardware failure policy in place, you and your business stand a better chance.

Debian Stable.

Install gdisk 
aptitude install gdisk

Show details of the md0 array
mdadm --detail /dev/md0
*2021 drive failing.
cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb1[4] sda1[3] sdc1[1]
3907023872 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
**Note: my 2012 failure looked like this:
cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc1[1] sdd1[2]
3907028864 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]

Since our drive has not failed yet but soon will, we mark it as failed manually.

mdadm /dev/md0 -f /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md0

If we didn’t fail it first, we would get this error when trying to remove it in the next step:

mdadm /dev/md0 -r /dev/sdb1
mdadm: hot remove failed for /dev/sdb1: Device or resource busy

Let’s remove the drive from mdadm. (Note: if you are not sure it’s sdb1, run lsblk to confirm.)

mdadm /dev/md0 -r /dev/sdb1
mdadm: hot removed /dev/sdb1 from /dev/md0
#We can see our mdadm now shows the drive missing
cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sda1[3] sdc1[1]
3907023872 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]


If you are putting the new drive in the same slot, it should come up under the same name, but we need to make sure. If the drive name changed, you would not want to be making partition changes on the wrong drive. MAKE SURE THE NEW DRIVE IS STILL sdb.
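One hedged way to double-check that sdb really is the new drive is to compare serial numbers. A sketch (the old serial below is a made-up placeholder; note down your real one from smartctl -i /dev/sdb or lsblk before pulling the failed drive):

```shell
# Compare the replacement drive's serial against the failed drive's serial.
old_serial="WD-WCC4EXAMPLE1"   # placeholder: record the real serial before the swap
new_serial="$(lsblk -dno SERIAL /dev/sdb 2>/dev/null || true)"
[ -n "$new_serial" ] || new_serial="unknown"
if [ "$new_serial" = "$old_serial" ]; then
  echo "WARNING: /dev/sdb still reports the failed drive's serial"
else
  echo "/dev/sdb serial: $new_serial (not the old drive)"
fi
```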
Look at how the disk is structured and what partition types it has:
sgdisk -p /dev/sdb
Disk /dev/sdb: 3907029168 sectors, 1.8 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 0ED13F81-6EEA-4E12-9F27-DD806CF1F09C
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 3907029134
Partitions will be aligned on 8-sector boundaries
Total free space is 0 sectors (0 bytes)

#sgdisk -R=/dev/TO_THIS_DISK /dev/FROM_THIS_DISK
sgdisk -R=/dev/sdb /dev/sda
#Give it a new GUID, since the option above clones the disk including its GUID
sgdisk -G /dev/sdb
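Before re-adding, it can be worth sanity-checking that the clone worked. A small sketch: feed it the output of `sgdisk -p` for each disk with the per-disk header (model, GUID) stripped, since those are expected to differ. The helper name is my own, not an sgdisk feature:

```shell
# compare_tables FILE1 FILE2 -> reports whether two partition listings match.
compare_tables() {
  if diff "$1" "$2" >/dev/null; then
    echo "partition layouts match"
  else
    echo "layouts differ -- stop and investigate before re-adding"
  fi
}
# Real usage (needs root and the gdisk package):
#   sgdisk -p /dev/sda | tail -n +6 > /tmp/sda.parts
#   sgdisk -p /dev/sdb | tail -n +6 > /tmp/sdb.parts
#   compare_tables /tmp/sda.parts /tmp/sdb.parts
```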

Now re-add the drive to md0
mdadm /dev/md0 -a /dev/sdb1

Check the status
cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[4] sda1[3] sdc1[1]
3907023872 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
[>....................]  recovery =  0.0% (253788/1953511936) finish=384.8min speed=84596K/sec

#....few minutes later

cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[4] sda1[3] sdc1[1]
3907023872 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
[=>...................]  recovery =  7.4% (145761912/1953511936) finish=391.5min speed=76950K/sec

Done. Check back in a few hours to see whether it finished.
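If you would rather check the rebuild from a script than eyeball /proc/mdstat, the percentage can be pulled out with awk. A sketch against a sample line (on the real system, replace the echo with `cat /proc/mdstat`):

```shell
# Extract the recovery percentage from an mdstat-style progress line.
sample='      [=>...................]  recovery =  7.4% (145761912/1953511936) finish=391.5min speed=76950K/sec'
echo "$sample" | awk '/recovery/ {for (i = 1; i <= NF; i++) if ($i == "recovery") print $(i + 2)}'
```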

Enabling HTTP/2

Install the packages with aptitude or apt-get:

aptitude install php-fpm
a2enmod proxy_fcgi setenvif
a2enconf php7.3-fpm

#Let's enable mpm_event

a2dismod php7.3
a2dismod mpm_prefork
a2enmod mpm_event

/etc/init.d/apache2 restart

a2enmod http2

Browsers don't support the HTTP/2 protocol unless it runs over TLSv1.2 or newer. You can disable the older protocol versions if you would like:
SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
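Note that a2enmod http2 only loads the module; HTTP/2 still has to be switched on per server or per vhost with the Protocols directive. A minimal sketch of the relevant vhost bits (ServerName and certificate paths are placeholders for your existing TLS setup):

```
<VirtualHost *:443>
    ServerName example.com
    Protocols h2 http/1.1
    SSLEngine on
    SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
    # SSLCertificateFile / SSLCertificateKeyFile as in your existing config
</VirtualHost>
```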

A few errors came up along the way. The order of commands above avoids them.

a2enmod mpm_event
Considering conflict mpm_worker for mpm_event:
Considering conflict mpm_prefork for mpm_event:
ERROR: Module mpm_prefork is enabled - cannot proceed due to conflicts. It needs to be disabled first!

a2dismod mpm_prefork
ERROR: The following modules depend on mpm_prefork and need to be disabled first: php7.


Mysql/MariaDB Upgrade from Debian 9 to 10

I read the release notes for Debian.

Then I did the standard upgrade from Debian 9 (stretch) to Debian 10 (buster).

In this post we will cover the proper way to upgrade MySQL so as not to run into this problem. While I'm not saying this covers every step of a MySQL/MariaDB upgrade, it's a good lesson in what to expect if the upgrade is done incorrectly.

apt update
apt-get upgrade
sudo mysqldump --all-databases -u root -p > debian_9to10_20191202_mysqlbackup.sql
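Before going any further, it is worth checking that the dump actually completed; mysqldump writes a "Dump completed" footer on success. A sketch (filename taken from the step above):

```shell
# Sanity-check the backup before touching sources.list.
dump=debian_9to10_20191202_mysqlbackup.sql
if [ -s "$dump" ] && tail -n 1 "$dump" | grep -q "Dump completed"; then
  echo "dump looks complete -- safe to continue"
else
  echo "dump missing or truncated -- do NOT proceed with the upgrade"
fi
```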
###Conclusion of this blog post: You need to restart before the upgrade. ###
###The key is to release any locks on the databases held by apache2 or  ###
###other services that have active connections.                         ###
###So reboot at this step (yes, reboot here).                           ###
change /etc/apt/sources.list to point at the new Debian stable (buster).
apt update
apt-get upgrade
#went through the standard review of package-maintainer config prompts (keep or show diff)
apt full-upgrade
#went through the standard review of package-maintainer config prompts (keep or show diff)

This is what happened. Watch the failure, and the next steps that followed.
Unfortunately MariaDB did not start, and I started to get error messages.

[ERROR] InnoDB: Failed to find tablespace for table lm.recall_db in the cache. Attempting to load the tablespace with space id 103
2019-12-08 22:24:36 171 [ERROR] InnoDB: Failed to find tablespace for table mysql.innodb_index_stats in the cache. Attempting to load the tablespace with space id 2
2019-12-08 22:24:36 171 [ERROR] InnoDB: Refusing to load ‘./mysql/innodb_index_stats.ibd’ (id=105, flags=0x21); dictionary contains id=2, flags=0


[ERROR] InnoDB: In pages [page id: space=0, page number=11412] and [page id: space=0, page number=11399] of index url of table wordpress.wp_redirection_404

tail -f /var/log/mysql/error.log

Because I use InnoDB there are no recovery/repair options, because it's not supposed to crash? The logs say:

mysql [ERROR] InnoDB: Corruption of an index tree: table dump + drop + reimport



In order to do anything I had to get MySQL started, which means starting recovery mode. I read up online; we start from level 1 and take it one step at a time.
vi /etc/mysql/mariadb.conf.d/50-server.cnf

and add

innodb_force_recovery = 1

You continue

mysqldump --all-databases --add-drop-database --add-drop-table -u root -p > dump_after_crash_20191208.sql

  • Notes:
  • [Deals with TABLESPACE and possible recovery?]
  • [Deals with possibly using the optimize command to fix the table; in my case, as soon as I did anything to a broken table the mariadb/mysql server crashed and needed to be restarted. This article was helpful for learning the thought process]
  • [similar to above trying to use optimize to fix table]
  • [helpful in the trying to find the issue, after that it gets into drop, create insert data which I didn’t want to do]
  • [Comments on recovery in InnoDB and how it's not supposed to crash; I'm running on bare hardware, so not virtualized]
  • [Trying to mysqldump after the upgrade and crash to see if I can export data before I'm forced to completely corrupt it; first mention of the PV program]
  • [This now gets toward the end aka playing with .ibd and .frm files which are files that mysql uses to store data. I was not able to use this info other to learn the process of possible recovery]
  • [Similar issue as mine, Tablespace is missing for table ‘mysql/innodb_index_stats’ , this one is for windows so not related but good to see how they troubleshoot]
  • googling: InnoDB: You should dump + drop + reimport the table to fix the corruption.
  • [related to my problem: using the command mysqlcheck --all-databases -u root -p, when it gets to a corrupt table it crashes the whole MySQL server. It's hard to fix the DB if it is not up.]
  • [] Lets read up on recovery options for innodb.
  • As I proceeded from 1 to 2 to 3, googling this exact search yielded a few links: “innodb_force_recovery = 4”
  • [] Quick intro on repairing corrupt innodb.
  • [] A bug that seems related, but it's old.
  • [] Getting dangerous. At this point the hopes of recovery are low, and knowing we have a full dump from before the upgrade, we are forcing InnoDB recovery of the corrupted database. Please note that this is risky: copy the mysql folders first so you have a copy of the DB files if needed.
  • [] More technical force recovery to learn more on what it does.
  • [] MySQL bug that has similar errors to mine, except it was running on Windows Enterprise 64-bit
  • [] Another ticket. Somewhat similar but not really. Still worth a read.
  • [] ok read on what people are saying.
  • [google search “innodb_index_stats Warning : Tablespace is missing for table “] We are recovering individual tables to see if it's just one table preventing recovery of the others.
  • [] Related to innodb_index_stats innodb_table_stats slave_master_info slave_relay_log_info slave_worker_info
  • [] yet more on innodb_table_stats
  • [] This now gets very deep. We are playing with .ibd and .frm files… be aware of what this means. There is no going back at this point.
  • [] More playing with tables, and restoring and overwriting and changing structures… this is bad.
  • [] Good read on a bug, upgrade causes crash.
  • [] What is wp-redirection-404 table?
  • [google “mysql dump restore table all databases how to restore only one”] Trying to restore failed tables from backup. It looks like you can't restore one table from a backup of all tables without extra scripts. []
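Since there is no built-in "restore one table" from a full mysqldump, one common workaround is to slice the dump on the `Table structure for table` comment markers that mysqldump writes. A sketch with made-up table and file names (if the wanted table is the last one in the dump, the range simply runs to end of file):

```shell
# Build a tiny fake full dump, then cut out just the wp_posts section.
cat > /tmp/full_dump.sql <<'EOF'
-- Table structure for table `wp_posts`
CREATE TABLE wp_posts (id INT);
INSERT INTO wp_posts VALUES (1);
-- Table structure for table `wp_redirection_404`
CREATE TABLE wp_redirection_404 (id INT);
INSERT INTO wp_redirection_404 VALUES (2);
EOF
# Print from the wanted table's marker up to the next table marker,
# then drop that trailing marker line.
sed -n '/Table structure for table `wp_posts`/,/Table structure for table `/p' /tmp/full_dump.sql \
  | sed '$d' > /tmp/wp_posts_only.sql
cat /tmp/wp_posts_only.sql
```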

[] Backup and restore. Learning how to restore to understand what can go wrong.

As you can imagine, my story didn’t end well. After 2 days we were left with no choice but to restore from the mysqldump we made prior to the upgrade. I hope this helps you understand what can go wrong, so you can adjust your upgrade process and make sure you are covered.

Have fun!


Upgrade from Dell Latitude E7470 to Dell XPS P56F002

  • New Looks, Sharp
  • The 15-inch screen with a small bezel, vs the 14-inch, makes it look like you have a 17-inch in front of you.
  • Thunderbolt:
    • you have to buy this StarTech TB3DKM2HD Docking Station:
      plug in network, 2x HDMI, 1x USB 3.0, 1x USB 2.0.
    • This is in contrast to other StarTech docking stations that run over USB-C (1 HDMI only), or Dell docking stations based on USB-C (USB-C is bad, Thunderbolt is good). If you buy one that runs over Thunderbolt it will support 2x HDMI, so you can connect both of your standing-desk screens.
    • Debian 10: F*** amazing… wow, what a difference. No wonder the people in our IT department use Debian 10 as their default operating system on a daily basis.
  • Lets get our bad boy setup:
    • open terminal
    • tasksel (select Gnome, Laptop, Print server; if you are a developer, add web server too)
    • Desktop Done!
    • Lets move our stuff from old laptop:
    • What's my new computer's IP?
      ip address (note that ifconfig is no longer installed by default)
      looks like the 3rd network card, ens1, has an address of xx.xx.xx.xx
    • What is my old computer ip address:
  • Lets move it
    • sudo apt-get install rsync
    • (if you have not added yourself to sudo, do it now:)
    • su root
    • adduser lucas sudo
    • log off and log back in
    • rsync -aAXv --exclude=/lost+found --exclude=/root/trash/* --exclude=/var/tmp/* --exclude=/home/* /mnt/src/* /mnt/dest
    • (note: rsync does not understand ssh:// URLs, so the remote-shell form below is the one that works)
    • rsync -aAXv --exclude=/tmp --exclude=/mySuperOldFolder/* -e ssh lucas@yy.yy.yy.yy:/home/lucas/projects /home/lucas/

Lets get to work!

Let’s try the Octane score and compare before and after:

Moving / (root partition) to NVME while keeping /home on HDD

I need to move my running Debian Linux machine to my new M.2 NVMe Samsung drive to gain a 10x IO speed improvement. It’s crazy fast!

    • The procedure is very similar to the one described below (see “Move root partition to home partition”)
    • Prerequisites:
        • You have already installed the NVMe drive and partitioned it
        • You have a GPT layout with grub/boot partitions similar to:
          /dev/nvme0n1p1 [ 2.00 MiB]
          /dev/nvme0n1p2 [ 550.00 MiB]
          /dev/nvme0n1p3 [ 465.22 GiB]
        • You have mounted your /dev/nvme0n1p3 partition at /nvme to confirm that your current system sees it and can write to it.
    • Lets continue:
    • We mount everything using Debian live as in the link
    • We rsync src and dest (exclude home)

rsync -aAXv --exclude=/lost+found --exclude=/root/trash/* --exclude=/var/tmp/* --exclude=/home/* /mnt/src/* /mnt/dest/

    • We continue with the instructions on the other page, and when it's all done let's make sure we update /etc/fstab
    • We update fstab old partition to new:

# change current "/" to "/home"
#change "/nvme" to "/"
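As a concrete sketch, the edited /etc/fstab might end up looking like this (the UUIDs and ext4 type are placeholders; get the real values from blkid):

```
# /etc/fstab after the swap -- UUIDs below are placeholders
UUID=1111-aaaa  /      ext4  errors=remount-ro  0  1    # nvme0n1p3, was mounted at /nvme
UUID=2222-bbbb  /home  ext4  defaults           0  2    # old HDD root, now serving /home
```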

    • At this point I would recommend you reboot, but note that you will only be able to log in via the command line (e.g. CTRL+ALT+F2): after the reboot your old home lives at /home/home, and gnome-shell will not like that. We will need to move it, and move everything else into a temporary root_home folder we will create. The reason I recommend rebooting here is to make sure you have done everything properly. If it boots, you are good to move the old unused files; if it doesn't boot, you can still go back to the old system by reverting /etc/fstab and trying the instructions again.
    • If you confirmed its all good, and only then let’s do below:

cd /home
mkdir root_home
# move the old root's top-level folders out of the way
# (mv will refuse to move root_home into itself; that is fine)
mv /home/* /home/root_home
# then bring the real home directories up to /home
mv /home/root_home/home/* /home/


  • Move /var/log to /home/log
  • To move /var/log to /home/log we will rsync everything over and then bind-mount it.
    /etc/init.d/rsyslog stop
    cd /home/
    sudo mkdir log
    cd /var
    sudo rsync --remove-source-files -azv /var/log/ /home/log
    #Note I had to repeat the rsync multiple times, because other services like apache and mysql were still writing logs
    #Now lets edit fstab
    sudo vi /etc/fstab
    #Add a mount entry that tells the system to bind /home/log onto /var/log. This way all logs go to the HDD, while the rest of your system runs on the NVMe/SSD.
    /home/log /var/log auto defaults,nofail,nobootwait,bind 0 2
  • Enjoy!
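Before rebooting, you can at least sanity-check the shape of the new fstab line; after a reboot (or `mount -a` as root), `findmnt /var/log` should then show the bind mount. A rough sketch against the line from the step above (note: nobootwait is upstart-era wording; on systemd boxes x-systemd options are the usual spelling):

```shell
# Check the fstab entry has 6 whitespace-separated fields and the bind option.
line='/home/log /var/log auto defaults,nofail,nobootwait,bind 0 2'
echo "$line" | awk 'NF == 6 && $4 ~ /(^|,)bind(,|$)/ {print "fstab entry looks well-formed"; ok=1}
                    END {if (!ok) print "entry malformed -- recheck fields"}'
```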

Cookiecutter – Modifying the context in hooks

Cookiecutter is a templating tool where you set up a skeleton of a project, and based on parameters from cookiecutter.json it prefills all files with the supplied values.

Advanced Cookiecutter question:
How can I add new context based on what was submitted via cookiecutter.json, derive my own variations, and pass them back into context/extra_context to be rendered?

  • Example 1: if {{ cookiecutter.project_name }} == myapp and {{ cookiecutter.github_username }} == lszyba1, then add a new context variable called mygreatuser='ProSupport'. Then in template files I would use that variable to fill in some values.
  • Example 2: if {{ cookiecutter.framework_to_deploy }} == 'pyramid':
    deployment_prod_or_dev_file = ask_more_questions(….) (This allows me to write my own function to ask more questions: if the user said pyramid then ask X, if they said django then ask Y.)
  • Example 3: import os ; workfolder = os.getcwd() ; context['workfolder'] = workfolder (this inserts a new variable I can render in the template.)


Cookiecutter has a hooks folder, but it does not allow the context to be modified out of the box, so we need to add about 10 lines of code.
Since the generated files are rendered with only the values from cookiecutter.json, I will do my own programming in the hook, add more values to the context, and then re-render the files using mako.
This lets all files be rendered with jinja2 syntax (the cookiecutter default), while all my own template variables are left alone and rendered by me using mako.
It also lets me mix and match code, using the original cookiecutter context in my python if statements.

Let's get started:
Create a hooks folder and add the hook file.

-- {{cookiecutter.folder_name}}
   |-- {{cookiecutter.package_name}}.conf
   |-- {{cookiecutter.package_name}}.txt
   |-- README.txt
-- cookiecutter.json
-- docs
-- hooks


cookiecutter.json contains:
{
  "folder_name": "apache2",
  "package_name": "myapp",
  "domain_name": "",
  "framework_to_deploy": ["pyramid", "django"]
}

My .conf file will be named myapp.conf if you just hit enter through the prompts.

Now let's go into the hook file:

#First get the cookiecutter context dictionary. Hook files are themselves rendered
#by jinja2, so the submitted values can be captured into a plain dict.
#Left is the new key I will use in mako; right is the value supplied from the original cookiecutter.json prompts.
context = {
    'folder_name': '{{ cookiecutter.folder_name }}',
    'package_name': '{{ cookiecutter.package_name }}',
    'domain_name': '{{ cookiecutter.domain_name }}',
    'framework_to_deploy': '{{ cookiecutter.framework_to_deploy }}',
}

#Add a few more of my own (the derived values here are illustrative):
import os
workfolder = os.getcwd()
context['workfolder'] = workfolder
context['better_package_name'] = context['package_name'] + '_pro'
context['user_name'] = 'lucas'

#Now lets re-render my template(s) with my additional 3 variables (aka workfolder,better_package_name,user_name..)
from mako.template import Template
from mako.lookup import TemplateLookup

mylookup = TemplateLookup(directories=[workfolder],strict_undefined=True)

def serve_template(templatename, **kwargs):
    mytemplate2 = mylookup.get_template(templatename)
    print('Rendering: ' + templatename)
    return mytemplate2.render(**kwargs)

#This loops through files in the workfolder. I need more testing to confirm the
#workfolder is where I think it is, so for now I will name the files explicitly.
#Render each template explicitly and write the result back out:
def save_template(workfolder=None, file_name=None, context=None):
    print('Saving: ' + workfolder + '/' + file_name)
    rendered = serve_template(file_name, **context)
    with open(os.path.join(workfolder, file_name), 'w') as out:
        out.write(rendered)

#Now let's re-render my template with my 3 new variables.
file_name = context['package_name'] + '.conf'
#or, since the hook itself is jinja2-rendered, this also works:
file_name = '{{ cookiecutter.package_name }}' + '.conf'
save_template(workfolder=context['workfolder'], file_name=file_name, context=context)

#Done, now the template contains all my new variables.

Here is a template sample.

##########Start of {{cookiecutter.package_name}}.conf ########

#I like your new project {{cookiecutter.package_name}}. I think it will be awesome, but you should consider giving it a better name ${better_package_name}.

#Beginning the configuration per ${user_name} instructions

#some code,conf,etc...
Alias ${better_package_name}/{{cookiecutter.package_name}}.txt ${workfolder}/${package_name}
##########End of {{cookiecutter.package_name}}.conf ########


#Ask questions based on cookiecutter parameters
def ask_more_questions(question=None):
    try:
        output = input(question)
    except NameError:
        output = raw_input(question)
    return output

if context['framework_to_deploy']=='pyramid':
    deployment_prod_or_dev_file=ask_more_questions('What file you want to deploy: [development.ini] or production.ini :')
    context['deployment_prod_or_dev_file']=(deployment_prod_or_dev_file or 'development.ini')

#Now, for every input question from cookiecutter, I can do if statements: if a, then b...

Hope you enjoyed it. Have fun creating your awesome new template. Many thanks to the cookiecutter team for making simple yet powerful project-templating software!

Use the tools available: this post is about being able to programmatically add context values based on the initial cookiecutter.json; it is not about which python templating language is better.

AirMouse (MX3)- Disable Power Button (linux)

**Kids keep pressing the power button on a remote and on a laptop**
**You want your computer to be always on**
**#Why an air mouse? See below**

    In order to disable the power button:

  • go to Settings in gnome shell (or search for Settings)
  • click on Keyboard
  • go to the Windows section
  • select the “Toggle Full Screen” shortcut
  • and press the Power Button to assign it

*Warning: after this, pressing the power button on the laptop or computer makes the focused page go full screen.
*Long-pressing the power button will still force a shutdown in case your computer is frozen.
*Follow these instructions only if you expect the computer to be always on.

Now the button is disabled: pressing it puts Firefox or Chrome into full-screen mode so your kids can watch dinosaur cartoons on YouTube, instead of pressing the power button and turning off your Linux computer.

AirMouse Remote for Linux

– If you have an old laptop
– Place it behind a TV and plug it in using HDMI
– Set up autologin so it doesn't ask for a password; set no screensaver and never turn off the screen
– Install kodi, flash, etc.
– Buy an air mouse, plug in its USB dongle, and now you have a mouse and keyboard in a remote.


Quick Intro to Cassandra vs MongoDB with python

Cassandra Nosql

    Cassandra Conclusion:

  • “One way that Cassandra deviates from Mongo is that it offers much more control on how it’s data is laid out. Consider a scenario where we are interested in laying out large quantities of data that are related, like a friend’s list. Storing this in MongoDB can be a bit tricky – it’s not great at storing lists that are continuously growing. If you don’t store the friends in a single document, you end up risking pulling data from several different locations on disk (or on different servers) which can slow down your entire application. Under heavy load this will impact other queries being performed concurrently.”[1]
  • If you have a mature project that requires a lot of consecutive data that you will want to read back later without jumping around to different disks, Cassandra looks like a strong candidate for:
    1. Show last 50 items for “TheMostIntrestingPersonInTheWorld”: item1,item2,..item3000..
    2. Show me last comments on “TheLucasMovie”: comment1,comment2,comment3,
    3. Show water level in Louisiana RiverIoT: level at 8am,level at 8:01am,level at 8:02am, x 100-1000 locations
  • Great if you have data structure already setup, and it fits above model. [2][3]


    MongoDB Conclusion:

  • No structure needed: import pymongo, mydb = client.myawsomedatabase, and start inserting data. Done.
  • You have a project and you are not sure how NoSQL will handle it but you want to try it. [4]
  • You have a working process, but it has grown to a point where a traditional RDBMS can't handle the IO load. [5]
  • You don’t have time to create table structures just now, you just want to get going, and see what happens.
  • You want to find documentation with python fast, and benefit from large community examples.

Cassandra Python
Cassandra Code in Python; Details:

#Add cassandra repo to /etc/apt/sources.list
deb 37x main
sudo apt-get update
update-alternatives --config java  #pick openjdk 8
sudo apt-get install cassandra
nodetool status
nodetool info
nodetool tpstats
virtualenv -p python3 env_py3
source env_py3/bin/activate
pip install cassandra-driver


from cassandra.cluster import Cluster
cluster = Cluster(['127.0.0.1'])  # connect to the local node
session = cluster.connect()

#nodetool status
#nodetool info
#nodetool tpstats

session.execute("CREATE KEYSPACE vindata WITH replication = { 'class': 'SimpleStrategy', 'replication_factor': '1' }")
session.execute("use vindata")
# slide 23
session.execute("""
CREATE TABLE emissions (
    vin text,
    make text,
    year text,
    zip_code_of_station text,
    co2 text,
    year_month_key int,
    PRIMARY KEY (vin, year_month_key)   -- key columns assumed; the original listing was cut off
)
""")

#Load mydata

import glob
session.execute("use vindata")

for datafile in glob.glob("./data/*.dat"):
    with open(datafile, 'r') as f:
        for row in f.readlines():
            fields = row.strip().split(',')  # field order assumed: vin,make,year,zip,co2,year_month_key
            fields[5] = int(fields[5])       # year_month_key column is an int
            session.execute(
                "INSERT INTO emissions (vin, make, year, zip_code_of_station, co2, year_month_key) "
                "VALUES (%s, %s, %s, %s, %s, %s)",
                fields)

future=session.execute_async("SELECT * FROM emissions where vin='1B4GP33R9TB205257'")
rows = future.result()
for row in rows:
    print(row)

MongoDB and Python
MongoDB Code in Python; Details:


sudo aptitude install mongodb
/etc/init.d/mongodb start
virtualenv -p python3 env_py3
source env_py3/bin/activate
pip install pymongo


from pymongo import MongoClient
client = MongoClient('mongodb://localhost:27017/')
#create database
db = client.vindata
#create collection/table
emissions = db.emissions

#Load data from mydata
import glob
for datafile in glob.glob("./data/*.dat"):
    with open(datafile, 'r') as f:
        for row in f.readlines():
            vin, make, year, zip_code, co2, ymk = row.strip().split(',')  # field order assumed
            emissions.insert_one({'vin': vin, 'make': make, 'year': year,
                                  'zip_code_of_station': zip_code, 'co2': co2,
                                  'year_month_key': int(ymk)})



Move root partition to home partition

If you have made these mistakes when installing your Linux Debian Server at home or work, this article might help you.

* You split the drive into / (root partition) and a /home partition. 3 years and 1-2 Debian upgrades later, you run out of space on root. You need to move it to the home partition, which has 400GB+ free.
* You installed ext3 on your / (root partition) instead of xfs for your server. You are finding that mdadm, ext3, and long-running servers do not play nicely together when things don't go as planned. Your home partition is xfs, so now you would like to move root there to be on xfs.

Moving the root partition to home should not be taken lightly. You should not be doing this if you are not having any problems. You are doing it at your own risk, so you need to read up on what each of these commands might do to your system. You should not be replacing or deleting any data. Do not delete the old root until a week later, when you have confirmed it all worked.

Prep Work:
When your computer boots and the grub menu shows up, press “e” to see what grub is doing, and write it down. Mine was doing the below. When I searched for troubleshooting howtos, the key reason some solutions didn't work was that these modules were not loaded: insmod part_gpt and insmod ext2.
in grub menu
insmod part_gpt
insmod ext2
set root='(hd0,gpt1)'
search --fs-uuid --set a80.....
linux /vmlinuz-3.2.9-4-amd64 root=/dev/mapper/my_lvmgroup_root

*Download the Debian live CD
*In my case, GPT partitions: sda only has my boot partition; sdb, sdc, sdd contain a 4TB RAID 5 with 3 LVM volumes for root, swap, and home.

*Use the lucasmanual guide to mount the RAID LVM from a live Debian CD

*Mount source (root partition) and destination (home partition)
#root access
sudo -i
cd /mnt
mkdir src
mkdir dest
mount /dev/mapper/my_lvmgroup_root src
mount /dev/mapper/my_lvmgroup_home dest

*Now the key step: you will need to create a home folder inside your home partition and move all the existing files there.
cd /mnt/dest/
mkdir home

#now move the files you need. This will make the current system accessible only through the shell: you will need to access it by pressing e.g. Alt+F2 and using the command line, so be prepared to have a backup tablet for troubleshooting.
#cp /mnt/dest/lucas /mnt/dest/home/lucas
#….keep going.

*Now let's sync the root partition onto home. Many web pages say to exclude proc, sys, tmp, etc., but I decided to copy them since it should work either way.
*I have added other excludes as I saw fit

rsync -aAXv --exclude=/lost+found --exclude=/root/trash/* --exclude=/var/tmp/* /mnt/src/* /mnt/dest/
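Before moving on to grub, a quick hedged spot-check that the copy really landed can save a bad reboot. A sketch (the helper name is my own; paths come from the mounts above, and missing or differing files show as DIFF):

```shell
# check_copy SRC DEST FILE... -> prints OK/DIFF per file.
check_copy() {
  src=$1; dest=$2; shift 2
  for f in "$@"; do
    if cmp -s "$src/$f" "$dest/$f"; then
      echo "OK   $f"
    else
      echo "DIFF $f -- missing or changed, investigate before proceeding"
    fi
  done
}
# Real usage after the rsync above:
#   check_copy /mnt/src /mnt/dest etc/fstab etc/hostname etc/passwd
```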

*If your boot partition is located elsewhere (you can tell if it's mounted from a different partition in /mnt/src/etc/fstab), you need to mount it in dest.

*Mount the uuid of the boot into the /mnt/dest/boot
mount UUID=abc...123 /mnt/dest/boot

*I made a copy of it, just in case:
#cp -r /mnt/dest/boot /mnt/dest/boot_copy

*The next day, when the copy is done, it's time to update grub and /etc/fstab.

umount /mnt/src
*Let's create a root_old folder where we will mount the old partition.
mkdir /mnt/dest/root_old

*Let's update /etc/fstab:
vi /mnt/dest/etc/fstab
#And change your /home partition to /
#Change your old / to /root_old
#The last field on each line is the fsck order: 0 = no check, 1 = check first, 2 = secondary check. So set your new / to 1 so that fsck checks it as needed.

*First, in order to update grub we need the real dev, proc, and sys from the currently running system (note: these come from the live CD environment, not /mnt/src).

mount -o bind /proc /mnt/dest/proc
mount -o bind /dev /mnt/dest/dev
mount -o bind /sys /mnt/dest/sys

*Now we will chroot… this temporarily changes what “/” points to in my shell.
chroot /mnt/dest

*Run update-grub. You should see:
Generating grub.cfg
Found background image....
Found linux image....
Found initrd image....
Found Debian GNU/Linux 7.8 on /dev/mapper...
#Note: for some reason mine always says it found it on the root LVM, not the home LVM, but these instructions did work

*Install grub on the first device, where your current grub lives. This was required for me because even though I ran update-grub, it would still boot into the old / (root partition).
grub-install /dev/sda

*When the system reboots and the grub menu shows up, press e to check that it points to my_lvmgroup_home.

*Notes 1:
I guess one of the key points for me was: on the old / there was a boot folder with old scripts pointing to lvmgroup_root… I renamed it to boot_notused….
I loaded the system, mounted everything src,dest,dev,sys,proc then
grub-install /dev/sda

* Note 2: when I booted with lvmgroup_home as /, I needed to run not only update-grub but also grub-install /dev/sda

*Note 3: When searching for help, somebody said “delete the old root partition” and it should work… WRONG… NEVER DELETE the old partition. With Linux, as long as you don't delete or overwrite your data, you can always go back. So don't listen to people who tell you to delete your data. There is no going back from that.

*Note 4: At some point in the future I wanted to redo my RAID 5 with proper GPT alignment, starting not at sector 63 but at sector 2048. I moved my old raid data to a temporary 2TB drive (and made sure that drive boots and runs fine), deleted and recreated my raid5 with new partition tables on each drive, and copied the data back onto the raid5. When I followed all the steps above, the system said it couldn't find /dev/mapper/mygroup_home… In the chroot I ran mount -a, and then I also needed update-initramfs -u to correct that.

#Guide that I was sticking by

#1 had missing mounts for proc,dev,sys

but the mkinitcpio step was not required… I didn't research what it is

talks about grub and what it is

some other steps that are very similar

some other steps that are very similar

for note 4, a few details on partition alignment: sector 63 vs sector 2048