-
- Service Provider
- Posts: 50
- Liked: 4 times
- Joined: Apr 25, 2022 6:18 pm
- Full Name: Bostjan UNIJA
- Contact:
VeeamSure (problems with Fedora VM's not getting IP in LAB)
Hi.
Case number: # 07015860
Veeam support team "gave up".
Issue:
- When we run the SureBackup (VeeamSure) procedure, Fedora Linux VMs don't get an IP in the Virtual Lab environment.
Veeam support's suggestion was to go through the steps below:
The following steps performed on production VM will resolve the issue:
1. List PCI buses and devices with the command lspci -D and find the network card. You'll need this in step 2.
2. You should have a file /etc/udev/rules.d/70-persistent-net.rules which contains a line similar to the following:
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="04:01:076e:01", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
Make a backup of this file (outside of rules.d) and edit the original to:
SUBSYSTEM=="net", ACTION=="add", KERNELS=="0000:00:03.0", NAME="eth0"
- Update the KERNELS== value to the first column of the lspci -D output. This binds the interface to the PCI bus address instead of the MAC address.
Comment out the line with the MAC address assignment, as shown in the image.
3. Comment out the line containing the hardware address in the eth0 config file /etc/sysconfig/network-scripts/ifcfg-eth0.
4. Reboot.
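The steps above can be sketched as a small shell snippet. This is only an illustration: the PCI address below is the example value from the rule above, and must be replaced with the first column of your own lspci -D output.

```shell
# Sketch of the support-suggested steps, assuming the example PCI
# address 0000:00:03.0 (substitute your own `lspci -D` first column).
PCI_ADDR="0000:00:03.0"

# Compose the replacement rule that binds the interface name to the
# PCI slot (KERNELS==) instead of the MAC address (ATTR{address}==).
RULE=$(printf 'SUBSYSTEM=="net", ACTION=="add", KERNELS=="%s", NAME="eth0"' "$PCI_ADDR")
echo "$RULE"

# On the VM itself you would then back up the old file outside rules.d
# and write the new rule, e.g.:
#   cp /etc/udev/rules.d/70-persistent-net.rules /root/70-persistent-net.rules.bak
#   echo "$RULE" > /etc/udev/rules.d/70-persistent-net.rules
# followed by a reboot.
```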
The problem is that at step 2 we don't have any files in /etc/udev/rules.d/ at all.
Has anyone managed to solve, or struggled with, the same IP issue with Fedora Linux VMs in the SureBackup lab?
By the way, an Ubuntu VM in the same environment does not have this issue; IPs are resolved successfully in the Virtual Lab.
Please advise.
-
- VP, Product Management
- Posts: 6749
- Liked: 1408 times
- Joined: May 04, 2011 8:36 am
- Full Name: Andreas Neufert
- Location: Germany
- Contact:
Re: VeeamSure (problems with Fedora VM's not getting IP in LAB)
The issue is the following.
VMware VM configuration by default gives VMs random MAC addresses.
If you restore a VM to another place or, in case of a major disaster, to new storage, then this VM will get a new MAC address from VMware.
Operating systems bind IP addresses differently: some bind the IP address to the NIC's PCI slot number, and some bind it to the NIC's MAC address. If at restore (and likewise in the Virtual Lab) the VM gets a new random MAC address, then the old IP address does not get bound to the NIC.
I would like to highlight that SureBackup did exactly its job here: it warned you early that, in case of a disaster, you would have lost your IP addresses in production after restore because of this misconfiguration.
2 ways out:
1) Go to the VM configuration and define a fixed MAC address. This way the VM will get the same MAC address after restore. Then go to the Linux VM and bind the IP address to the NIC with this specific MAC again.
This has some side effects. For SureBackup you will get a "double MAC" warning, which you need to ignore (as the MAC is not used in the same network/subnet). But on a restore under a different name in production, you would end up with two identical MACs if the old VM still exists (don't boot it after restore; change the MAC in the VM config first).
The better way:
2) Teach your Linux to bind the IP address to the PCI NIC slot; the sample from support shows how others solved it. There are countless Linux distributions and configurations out there, so I would search online or reach out to the Linux vendor's support on how to set this in your specific version/configuration.
-
- Service Provider
- Posts: 50
- Liked: 4 times
- Joined: Apr 25, 2022 6:18 pm
- Full Name: Bostjan UNIJA
- Contact:
Re: VeeamSure (problems with Fedora VM's not getting IP in LAB)
Thank you for your reply.
I have a few more questions if I may.
In this environment customer has (besides Fedora Linux distributions) also Ubuntu Linux distribution.
I noticed that SureBackup is able to run the heartbeat test on this particular Ubuntu VM, but on Fedora not even the heartbeat test succeeds. Is this related to your comment above?
If we teach the Linux VM to bind the IP address to the PCI NIC slot, will the VM have the same PCI NIC slot in the SureBackup lab environment as in production? Or will the VM get a different PCI NIC slot in the SureBackup Virtual Lab?
Thank you.
-
- VP, Product Management
- Posts: 6749
- Liked: 1408 times
- Joined: May 04, 2011 8:36 am
- Full Name: Andreas Neufert
- Location: Germany
- Contact:
Re: VeeamSure (problems with Fedora VM's not getting IP in LAB)
Correct. Ubuntu, Windows, and other Linux distributions bind the IP to the NIC in the specific slot, not to the MAC of the NIC.
This is why Ubuntu works out of the box, while SureBackup gives you the correct error that your production Fedora VM would lose its IP. In the end, this is a misconfiguration that should be corrected in your production VMs to avoid issues at disaster recovery (independent of whether you use Veeam or another backup product).
-
- VP, Product Management
- Posts: 6749
- Liked: 1408 times
- Joined: May 04, 2011 8:36 am
- Full Name: Andreas Neufert
- Location: Germany
- Contact:
Re: VeeamSure (problems with Fedora VM's not getting IP in LAB)
Maybe this is helpful, as Fedora is closely related to RHEL:
vmware-vsphere-f24/rhel6-surebackup-t11681.html
-
- Service Provider
- Posts: 50
- Liked: 4 times
- Joined: Apr 25, 2022 6:18 pm
- Full Name: Bostjan UNIJA
- Contact:
Re: VeeamSure (problems with Fedora VM's not getting IP in LAB)
Thank you for your reply.
I have read "vmware-vsphere-f24/rhel6-surebackup-t11681.html", and the last comment there is unfortunately unanswered.
We are dealing with a similar situation here: one network card on a Fedora Linux VM, but this company also uses Docker in this environment.
Although the idea was to exclude the Docker networks via the VMware Tools config file, I was wondering whether binding the IP address to the PCI NIC slot would break the Docker environment inside that VM?
-
- Service Provider
- Posts: 50
- Liked: 4 times
- Joined: Apr 25, 2022 6:18 pm
- Full Name: Bostjan UNIJA
- Contact:
Re: VeeamSure (problems with Fedora VM's not getting IP in LAB)
Ok, please ignore my previous question.
We are making some progress, but we are still stuck in the Veeam Virtual Lab environment, where the Fedora VM still doesn't get an IP.
Let me describe the steps I took.
I created a dedicated lab environment with ESXi, vCenter, VBR, and one VM (Fedora 39).
On this Fedora VM I checked the PCI slot for the NIC with lspci -D, and the output was: "0000:0b:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)"
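Just to make the "first column" explicit: the PCI address used in the udev rule is everything before the first space of that lspci -D line, as a trivial shell sketch shows.

```shell
# The lspci -D line captured on the Fedora VM (quoted above).
LSPCI_LINE="0000:0b:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)"

# The PCI address is everything before the first space.
PCI_ADDR=${LSPCI_LINE%% *}
echo "$PCI_ADDR"   # prints 0000:0b:00.0
```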
On the Fedora VM, the folder /etc/sysconfig/network-scripts/ contains just a readme text file, which can be ignored.
Inside /etc/udev/rules.d I created a 70-persistent-net.rules file with this content:
SUBSYSTEM=="net", ACTION=="add", KERNELS=="0000:0b:00.0", NAME="Mrezna1"
If I reboot the "production" Fedora VM and run ifconfig -a, the output shows the correct NAME, so this looks like it works as expected: the NIC is linked directly to the PCI slot defined in 70-persistent-net.rules.
Output of ifconfig -a:
Mrezna1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.124.142 netmask 255.255.255.0 broadcast 192.168.124.255
inet6 fe80::fc92:a1c1:d26d:16c6 prefixlen 64 scopeid 0x20<link>
ether 00:50:56:8f:d4:10 txqueuelen 1000 (Ethernet)
RX packets 1652 bytes 1167965 (1.1 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 441 bytes 53724 (52.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:fd:fb:60:d3 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 4 bytes 240 (240.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 4 bytes 240 (240.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
The correct NIC name persists even after rebooting the VM.
After the changes above we ran a fresh backup of this Fedora VM and ran the SureBackup job, but unfortunately the Fedora VM inside the Virtual Lab (still) does not get an IP. We do see the network card NAME "Mrezna1", so it looks like it is reading the PCI slot correctly from the 70-persistent-net.rules file; it just doesn't get an IP.
If we run lspci -D inside the Fedora VM (Virtual Lab environment), we get the same PCI slot ID, "0000:0b:00.0", as in production, just no IP on that interface...
And... after writing everything above, we double-checked the SureBackup Virtual Lab settings and, under Advanced Single Host Mode / Network Interfaces / VMnics, also enabled DHCP on that interface; the VM now gets an IP inside the Virtual Lab as well.
This looks very promising; we need to check the same thing in the customer's production environment.
-
- VP, Product Management
- Posts: 6013
- Liked: 2843 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: VeeamSure (problems with Fedora VM's not getting IP in LAB)
Modern Fedora versions have migrated almost exclusively to NetworkManager for network configuration, so a lot of the information from older distros doesn't actually apply, although the udev rules should still help, which your testing seems to confirm.
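For reference, on NetworkManager-based distros the same PCI binding can also be expressed in the connection profile rather than a udev rule. A minimal keyfile sketch, assuming NetworkManager 1.26 or later (which added the [match] path setting); the connection name and PCI path are just the examples from this thread:

```
# /etc/NetworkManager/system-connections/Mrezna1.nmconnection
# Hypothetical sketch: tie this connection to the NIC's PCI path rather
# than its MAC address. File must be root-owned with mode 600.
[connection]
id=Mrezna1
type=ethernet

[match]
# Device path derived from the PCI address shown by `lspci -D`
path=pci-0000:0b:00.0

[ipv4]
method=auto
```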
However, I think there might be an easier way to accomplish this. One thing I wasn't clear on: do you want it to use DHCP, and was the system originally using DHCP in production? Or do you have a static IP and are trying to get the same IP in the lab?
-
- Service Provider
- Posts: 50
- Liked: 4 times
- Joined: Apr 25, 2022 6:18 pm
- Full Name: Bostjan UNIJA
- Contact:
Re: VeeamSure (problems with Fedora VM's not getting IP in LAB)
Yes, the system is using DHCP in production, which is bad. It will be reconfigured to use static IPs.
-
- VP, Product Management
- Posts: 6749
- Liked: 1408 times
- Joined: May 04, 2011 8:36 am
- Full Name: Andreas Neufert
- Location: Germany
- Contact:
Re: VeeamSure (problems with Fedora VM's not getting IP in LAB)
I am sorry; in that case, the above configuration around port binding does not matter, since DHCP is in use.
In the Virtual Lab configuration wizard you can enable DHCP, and then the servers will get IP addresses.
To ensure that DHCP works properly, the subnet mask needs to be correct, and the Veeam appliance needs to be given the production gateway IP addresses on the Virtual Lab network wizard page.