$ sudo qemu-system-i386 -m 512M -cdrom filename.iso
It’s not super fast but it works. I like it.
I use VirtualBox on my Linux box to run a virtual WinXP guest, used primarily for the Outlook client since we use Exchange for email. I was constantly running out of space on the system drive C:\ because I only created a 10GB disk. Thinking I was going to have to redo the entire XP installation to get a bigger disk, I had really been putting it off. Today, however, I found a howto guide that expands the disk without losing any data. This worked great, zero issues. Here's a link to that howto. It uses GParted, pretty slick.
http://www.my-guides.net/en/content/view/122/26/
Update: I also tried the clonehd method, and the two approaches are pretty much the same; clonehd does the same thing as the copy step in GParted. I'm on the fence about which one is actually better, as they both took about the same amount of time. Here are the instructions I used.
http://trivialproof.blogspot.com/2011/01/resizing-virtualbox-virtual-hard-disk.html
A core dump is very helpful for helping us tracking down crashes of VirtualBox. To create a core dump, start VirtualBox from a command line (e.g. xterm):
$ ulimit -c unlimited
$ sudo su
# echo -n 1 > /proc/sys/kernel/core_uses_pid
# echo -n 1 > /proc/sys/fs/suid_dumpable
# exit
$ VirtualBox
or, better, start the VM directly:
$ ulimit -c unlimited
$ sudo su
# echo -n 1 > /proc/sys/kernel/core_uses_pid
# echo -n 1 > /proc/sys/fs/suid_dumpable
# exit
$ /usr/lib/virtualbox/VirtualBox -startvm VM_NAME
Ensure that no startup script (~/.bashrc, ~/.bash_profile, ~/.profile) contains an instruction like ulimit -c 0, as the limit cannot be increased once it has been set to zero.
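You can check and raise the limit from the same shell before launching VirtualBox. A quick sketch:

```shell
# Show the current core file size limit; "0" means no core files are written
ulimit -c
# Raise it for this shell and every process it starts
ulimit -c unlimited
# Should now report "unlimited"
ulimit -c
```

Because ulimit only affects the current shell and its children, it has to be run in the same terminal you start VirtualBox from.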
Starting with version 2.0.0, the VirtualBox processes are started suid root, that is, with permissions to do things that “normal” applications cannot. This is the reason for the
$ sudo su
# echo -n 1 > /proc/sys/fs/suid_dumpable
# exit
before starting the VM/GUI (note that sudo echo will not do what we want here).
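The reason sudo echo fails is that your shell performs the > redirection before sudo ever runs, so the unprivileged shell is the one that tries to open the root-owned file. A sketch of the working pattern, rehearsed here on an ordinary temp file instead of the real /proc entry:

```shell
# This would fail: the redirection runs in *your* shell, not under sudo:
#   sudo echo -n 1 > /proc/sys/fs/suid_dumpable     # Permission denied
# Wrapping the whole command in a root shell works:
#   sudo sh -c 'echo -n 1 > /proc/sys/fs/suid_dumpable'
# The same sh -c pattern, demonstrated on a harmless temp file:
tmp=$(mktemp)
sh -c "echo -n 1 > $tmp"
cat "$tmp"    # prints: 1
rm -f "$tmp"
```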
When VirtualBox or one of its processes crashes, a file core.<pid> is created in the current directory. Be aware that core dumps can be very large, so please compress the file before submitting it to a bug report — or better, don't attach it to the report at all. Note that the core dump can contain a memory dump of your guest, which may include sensitive information. Send it to frank _dot_ mehnert _at_ oracle _dot_ com if the compressed file is smaller than 5MB; contact me directly otherwise.
If several core files are created, you can check which process created them using the command
$ file core.<pid>
to be sure of the right one to send.
To create a core dump on Mac OS X, start VirtualBox from a command line:
$ ulimit -c unlimited
$ VirtualBox
or, better, start the VM directly:
$ ulimit -c unlimited
$ /Applications/VirtualBox.app/Contents/MacOS/VirtualBox -startvm VM_NAME
Ensure that no startup script (~/.bashrc, ~/.bash_profile, ~/.profile) contains an instruction like ulimit -c 0, as the limit cannot be increased once it has been set to zero.
The core files can be found in the /cores folder.
To get a core dump on a Solaris host, run the following command as root:
# coreadm -g /var/cores/core.%f.%p -i core.%f.%p \
    -e global -e process -e global-setid -e proc-setid -e log
The cores will now be placed in /var/cores/, while global dumps will go into /var/crash/<hostname>.
System core dumps need to be enabled via dumpadm. The important thing is to have "Savecore enabled" set to "yes" (use dumpadm -y). The configuration should look something like this:
# dumpadm
      Dump content: kernel pages
       Dump device: /dev/zvol/dsk/rpool/dump (dedicated)
Savecore directory: /var/crash/myhostname
  Savecore enabled: yes
   Save compressed: on
Sometimes it is necessary to force a VirtualBox process to terminate, for example when a VM hangs for some unknown reason. On Linux, this can be done as follows:
$ ulimit -c unlimited
$ sudo sh -c 'echo -n 1 > /proc/sys/fs/suid_dumpable'
$ /usr/lib/virtualbox/VirtualBox -startvm VM_NAME &
$ pidof VirtualBox
7145
$ kill -4 7145
As an alternative to kill, you can use gcore:
$ pidof VirtualBox
7145
$ gcore 7145
On Mac OS X:
$ ulimit -c unlimited
$ /Applications/VirtualBox.app/Contents/MacOS/VirtualBox -startvm VM_NAME &
$ ps aux | grep VirtualBox
... 7145 ... VirtualBox ...
$ kill -4 7145
On Solaris:
# ulimit -c unlimited
# /opt/VirtualBox/amd64/bin/VirtualBox -startvm VM_NAME &
# ps -ef | grep VirtualBox
... 7145 ... VirtualBox ...
# kill -4 7145
You can find the resulting core file in the location specified by coreadm:
# coreadm
Passing the signal number 4 (SIGILL) is essential! The same applies to the alternative frontends VBoxHeadless and VBoxSDL.
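The 128+signal convention lets you confirm afterwards that a process really died from signal 4. A harmless rehearsal with a throwaway sleep process (no VirtualBox involved):

```shell
# Start a disposable process and force-terminate it with signal 4 (SIGILL)
sleep 30 &
pid=$!
kill -4 "$pid"
wait "$pid" || status=$?
echo "$status"    # prints: 132 (128 + 4, i.e. killed by signal 4)
```

The same check works on a VirtualBox process: an exit status of 132 from the shell that launched it confirms SIGILL was delivered.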
Please visit this Microsoft site for more details about minidumps
Please visit the Microsoft Performance Team blog for more details about Application crash dumps
The section To collect user-mode dumps of the Microsoft site explains how to enable user mode dumps on Windows Vista and Windows 7
More detailed information about collecting user-mode dumps is available on the MSDN site
Our standard build is 30GB. How do you expand the file system without losing data and/or the VM?
Our test
30GB standard VM
Need to increase /motr to 100GB
Before you start, verify that the new space can be seen. Here is the un-modified fdisk.
Either reboot, or just run the following command, which rescans the disk to find the new space:
echo 1 > /sys/block/sda/device/rescan
bwmx01d:~ # fdisk -l

Disk /dev/sda: 139.6 GB, 139586437120 bytes
255 heads, 63 sectors/track, 16970 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x18d9c094

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1           8       64228+  83  Linux
/dev/sda2               9        3916    31391010   8e  Linux LVM
bwmx01d:~ # df -h
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/system-LVRoot  4.8G  2.2G  2.5G  48% /
devtmpfs                   2.0G  116K  2.0G   1% /dev
tmpfs                      2.0G     0  2.0G   0% /dev/shm
/dev/mapper/system-LVhome  2.0G   46M  1.8G   3% /home
/dev/mapper/system-LVmotr  114G  836M  108G   1% /motr
/dev/mapper/system-LVtmp   985M   66M  869M   8% /tmp
/dev/mapper/system-LVvar   3.9G  142M  3.6G   4% /var
/dev/sda1                   63M   15M   45M  25% /lvmboot
bwmx01d:~ #
#1 Start by creating a new primary partition with your new disk space (this can also be an extended partition). The fdisk commands used:
p = print
n = create new partition
w = write
t = type (8e = LVM)
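If you want to rehearse these steps before touching a real disk, the same partitioning can be scripted against a scratch image file. This sketch uses sfdisk (fdisk's scriptable sibling); the file name and size are arbitrary:

```shell
# A 100 MB sparse file stands in for the disk
truncate -s 100M /tmp/scratch.img
# One partition covering the whole image, type 8e (Linux LVM)
echo 'type=8e' | sfdisk /tmp/scratch.img
# The listing should now show an 8e / Linux LVM partition
sfdisk -l /tmp/scratch.img
rm -f /tmp/scratch.img
```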
Here’s the original without the new partition.
bwmx01d:~ # fdisk /dev/sda

The number of cylinders for this disk is set to 16970.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sda: 139.6 GB, 139586437120 bytes
255 heads, 63 sectors/track, 16970 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x18d9c094

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1           8       64228+  83  Linux
/dev/sda2               9        3916    31391010   8e  Linux LVM

Command (m for help):
Add a new partition with the new space; we will create this as /dev/sda3 (n = create new partition, p = primary, then p again to print).
Command (m for help): p

Disk /dev/sda: 139.6 GB, 139586437120 bytes
255 heads, 63 sectors/track, 16970 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x18d9c094

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1           8       64228+  83  Linux
/dev/sda2               9        3916    31391010   8e  Linux LVM

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (3917-16970, default 3917): <hit enter>
Using default value 3917
Last cylinder, +cylinders or +size{K,M,G} (3917-16970, default 16970): <hit enter>
Using default value 16970

Command (m for help): p

Disk /dev/sda: 139.6 GB, 139586437120 bytes
255 heads, 63 sectors/track, 16970 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x18d9c094

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1           8       64228+  83  Linux
/dev/sda2               9        3916    31391010   8e  Linux LVM
/dev/sda3            3917       16970   104856255   83  Linux
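As a sanity check, the Blocks figure fdisk reports for /dev/sda3 can be reproduced from the cylinder geometry in the header (16065 * 512 = 8225280 bytes per cylinder):

```shell
cyls=$((16970 - 3917 + 1))          # cylinders spanned by /dev/sda3
blocks=$((cyls * 8225280 / 1024))   # bytes -> 1K blocks, as in the Blocks column
echo "$cyls cylinders = $blocks blocks"
# prints: 13054 cylinders = 104856255 blocks
```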
Now that it's created, we need to set the partition type to Linux LVM (t, then type 8e).
Command (m for help): p

Disk /dev/sda: 139.6 GB, 139586437120 bytes
255 heads, 63 sectors/track, 16970 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x18d9c094

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1           8       64228+  83  Linux
/dev/sda2               9        3916    31391010   8e  Linux LVM
/dev/sda3            3917       16970   104856255   83  Linux

Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): 8e
Changed system type of partition 3 to 8e (Linux LVM)

Command (m for help): p

Disk /dev/sda: 139.6 GB, 139586437120 bytes
255 heads, 63 sectors/track, 16970 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x18d9c094

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1           8       64228+  83  Linux
/dev/sda2               9        3916    31391010   8e  Linux LVM
/dev/sda3            3917       16970   104856255   8e  Linux LVM
Looks good. Now we need to write this partition table to disk using the w command.
Command (m for help): p

Disk /dev/sda: 139.6 GB, 139586437120 bytes
255 heads, 63 sectors/track, 16970 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x18d9c094

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1           8       64228+  83  Linux
/dev/sda2               9        3916    31391010   8e  Linux LVM
/dev/sda3            3917       16970   104856255   8e  Linux LVM

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot or after you run partprobe(8) or kpartx(8)

Syncing disks.
Run partprobe -s, which will rescan the partition table. A reboot will do the same thing, but why reboot?
Initialize the new partition as an LVM physical volume:
bwmx01d:~ # pvcreate /dev/sda3
  No physical volume label read from /dev/sda3
  Physical volume "/dev/sda3" successfully created
Extend the "system" volume group ("system" is the volume group name):
bwmx01d:~ # vgextend system /dev/sda3
  Volume group "system" successfully extended
Extend the logical volume:
bwmx01d:~ # lvextend /dev/system/LVmotr /dev/sda3
  Extending logical volume LVmotr to 115.62 GB
  Logical volume LVmotr successfully resized
Finally, grow the file system:
bwmx01d:~ # resize2fs /dev/mapper/system-LVmotr
resize2fs 1.41.9 (22-Aug-2009)
Filesystem at /dev/mapper/system-LVmotr is mounted on /motr; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 8
Performing an on-line resize of /dev/mapper/system-LVmotr to 30309376 (4k) blocks.
The filesystem on /dev/mapper/system-LVmotr is now 30309376 blocks long.
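The block count resize2fs prints is consistent with the size lvextend announced: 30309376 blocks of 4 KiB each comes out to roughly 115.6 GiB.

```shell
blocks=30309376              # from the resize2fs output
mib=$((blocks * 4 / 1024))   # 4 KiB blocks -> MiB
echo "$mib MiB"              # prints: 118396 MiB, i.e. ~115.6 GiB
```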
Note: I had issues using resize2fs on a physical host, where I was attempting to expand the file system by adding a new disk to the system. If resize2fs fails (and the file system is ReiserFS), try this as a last step:
bwmx01d:~ # resize_reiserfs /dev/system/motr
resize_reiserfs 3.6.21 (2009 www.namesys.com)
resize_reiserfs: On-line resizing finished successfully.
That’s it.
bwmx01d:~ # df -h
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/system-LVRoot  4.8G  2.2G  2.5G  48% /
devtmpfs                   2.0G  116K  2.0G   1% /dev
tmpfs                      2.0G     0  2.0G   0% /dev/shm
/dev/mapper/system-LVhome  2.0G   46M  1.8G   3% /home
/dev/mapper/system-LVmotr  114G  836M  108G   1% /motr
/dev/mapper/system-LVtmp   985M   66M  869M   8% /tmp
/dev/mapper/system-LVvar   3.9G  142M  3.6G   4% /var
/dev/sda1                   63M   15M   45M  25% /lvmboot