2014-09-12

Windows 8.1 - "Preparing Automatic Repair" Loop

Today, when I rebooted my Windows 8.1 instance (running under VMware Fusion 7 Pro), I was greeted by the "Preparing Automatic Repair" screen during boot. I keep backups of my VMware instances, so I could easily restore the instance from backup - and all my source code is also kept in remote git repositories. But I wanted to see if I could salvage this instance, and maybe learn something in the process.

Say hello to: Preparing Automatic Repair
 

I tried the various options presented but could not get out of "Preparing Automatic Repair"; 
on every reboot it would enter this state.




In addition, when I tried to do a repair, I was notified that I had too little space left (3.4 GB). 
First, I tried to remove some unnecessary user-produced files by dropping into the command console and deleting them, but even 6 GB of free space was not enough. So instead I decided to increase the VMware disk size for this instance by about 20 GB. This allowed the repair to start.
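
Growing the virtual disk can be done from the Fusion GUI (Virtual Machine settings > Hard Disk), or from the command line with the vmware-vdiskmanager tool that ships with Fusion. A sketch, where the disk path and the 80 GB target size are made-up examples rather than my actual values:

# Run on the Mac host while the VM is powered off; path and target size are examples only.
"/Applications/VMware Fusion.app/Contents/Library/vmware-vdiskmanager" \
    -x 80GB "/Volumes/ext-ssd/Windows 8.1.vmwarevm/Windows 8.1.vmdk"

The extra space shows up as unallocated inside the guest, so the Windows partition may also have to be extended afterwards (for example with diskpart).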

However, when I tried to complete a "fresh repair", it failed right after 1%.




  
So I rebooted, dropped into the command console through the Advanced options, and ran these commands:
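
For reference, a typical command sequence for this kind of disk check and boot repair looks something like the following (shown purely as an illustration; not necessarily the exact commands I ran):

rem Illustration only: common disk-check and boot-repair commands in the recovery console
chkdsk C: /f
sfc /scannow /offbootdir=C:\ /offwindir=C:\Windows
bootrec /fixmbr
bootrec /fixboot
bootrec /rebuildbcd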




I also disabled "Automatic Repair".
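
From the recovery console this is normally done with bcdedit, along these lines ({default} refers to the default boot entry; {current} can also be used):

rem Turn off the automatic-repair loop for the default boot entry
bcdedit /set {default} recoveryenabled No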



When I next rebooted, I was greeted by this message during boot:

 

     Boot critical file is corrupt: C:\Windows\system32\drivers\intelide.sys



I chose once again to run Repair, and this time it completed.
All installed applications, such as Visual Studio, were gone, but my user data/source code was still there.

Next up: restoring my VMware instance from backup.




2014-09-10

VMware Fusion 7 Pro


I use VMware Fusion on my MacBook Pro for both testing in Linux and Visual Studio development in Windows.

I keep my instances on an external SSD:

  • Buffalo MiniStation 2.5" 256 GB SSD, Thunderbolt / USB 3.0

  



Today I upgraded from VMware Fusion 6 Pro to 7 Pro, and one cool feature of the recently released VMware Fusion 7 Pro is the ability to:

  • Access virtual machines hosted on VMware vSphere, ESXi or Workstation

At home, I also run an ESXi home-lab server.

So through the new VMware Fusion 7 Pro I can now access the instances running on this ESXi home-lab server:


 CPU, DISK and MEMORY usage on ESXi server:



I can also access the consoles of the instances on my ESXi home-lab server directly from Fusion 7 Pro.



VMware Fusion 7 Pro also adds the ability to:

  • Upload or download virtual machines to vSphere, ESXi or Workstation





@NorSoulx (twitter)


2014-03-06

Benchmarking 8 x Intel 520 SSDs on ESXi home-lab server with IOZONE

Introduction

For a long time I've used VMware instances to prototype and test systems and network topologies for my work. I also use them to run my own home projects.

The list includes:
  • Redundant load-balancers and firewalls (iptables, Pacemaker, Corosync, ldirectord, iproute2)
  • NoSQL solutions, such as Riak and Tokyo Cabinet.
  • Oracle, PostgreSQL, MySQL databases
  • IBM GPFS clustered filesystem with clustered NFS on top, using iSCSI nodes.
  • Firewall and NSM solution to DMZ each of my kids' sub-networks at home.

Recently I upgraded my ESXi 5 home-lab server with 8 x Intel 520 240GB SSDs to bring more IO speed to my testing facilities, and also enough room to test NoSQL and distributed storage solutions with close to 2 TB of data. Sidenote: these 8 SSD drives cost in total about the same as one Seagate 2 GB SCSI disk I bought in 1994.


The SSDs are connected to an LSI 9211-8i card as JBODs, and this card is configured in ESXi passthrough mode. To get a feel for what kind of performance (IOPS) I can expect from these 8 SSDs when run in parallel, my first test uses IOZONE. This server has 16 GB RAM and one Xeon E3-1220 @ 3.10GHz CPU (4 cores).

The ESXi server also contains an Areca raid card with 256MB cache (w/BBU) connected to an external enclosure with 8 SATA disks, so these 8 SSDs should deliver a nice speedup when testing distributed storage solutions.

For this test I created an Ubuntu 12.04 server instance with 12GB RAM and 4 cores.
The other instances running on this ESXi host were shut down during the IOZONE testing.


I'm using two ICYDOCK hot-swap cabinets for the 8 SSDs.

Partitioning: gdisk

All 8 SSDs were partitioned using gdisk.

$ sudo gdisk -l /dev/sdb
GPT fdisk (gdisk) version 0.8.1

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdb: 468862128 sectors, 223.6 GiB
Logical sector size: 512 bytes
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 468862094
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048       468862094   223.6 GiB   8300  Linux filesystem
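
For reference, the same single-partition layout can also be created non-interactively with sgdisk (part of the same gdisk package); a sketch, with the device list mirroring the mkfs script below rather than being the exact commands I ran:

#!/bin/bash
# Sketch: create one whole-disk Linux partition on each SSD with sgdisk
DEVICES="sdb sdc sdd sde sdf sdg sdh sdi"
for dev in $DEVICES; do
  sgdisk --zap-all /dev/$dev            # wipe any existing partition table
  sgdisk -n 1:0:0 -t 1:8300 /dev/$dev   # partition 1: whole disk, type 8300 (Linux filesystem)
done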

Filesystem: XFS

I've been using XFS on Linux servers since around 2001, and have been very satisfied with it.

#!/bin/bash
DEVICES="sdb sdc sdd sde sdf sdg sdh sdi"
devno=1
for dev in $DEVICES; do
  echo /dev/${dev}1 
  mkfs.xfs -L ssd0$devno /dev/${dev}1
  let devno=$devno+1 
done

I used nobarrier for this benchmark; I've always used enterprise hardware with BBU cache when using this XFS mount option, but this test system will not contain any production-critical data so I've opted for the extra performance boost.

$ mount|grep xfs|sort 
/dev/sdb1 on /ssd01 type xfs (rw,noatime,nobarrier)
/dev/sdc1 on /ssd02 type xfs (rw,noatime,nobarrier)
/dev/sdd1 on /ssd03 type xfs (rw,noatime,nobarrier)
/dev/sde1 on /ssd04 type xfs (rw,noatime,nobarrier)
/dev/sdf1 on /ssd05 type xfs (rw,noatime,nobarrier)
/dev/sdg1 on /ssd06 type xfs (rw,noatime,nobarrier)
/dev/sdh1 on /ssd07 type xfs (rw,noatime,nobarrier)
/dev/sdi1 on /ssd08 type xfs (rw,noatime,nobarrier)
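
For reference, matching /etc/fstab entries would look something like this (a sketch only; the labels assume the mkfs.xfs -L names from the script above):

# /etc/fstab sketch - mount each SSD by the label set with mkfs.xfs -L
LABEL=ssd01  /ssd01  xfs  noatime,nobarrier  0  0
LABEL=ssd02  /ssd02  xfs  noatime,nobarrier  0  0
# ...and so on through ssd08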

IOZONE

For this test I used IOZONE version 3.420 (compiled from source).

Setup configuration:
  • Ubuntu RAM size:      12 GB
  • Total file-size used: 24 GB
  • Record-size used:      4 KB
Linux IO scheduler is set to noop for all SSD devices during testing.
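
Switching the scheduler is done per device through sysfs; a quick sketch for all eight SSDs (device names as in the scripts above; the setting does not survive a reboot):

# Run as root: switch each SSD to the noop IO scheduler (not persistent across reboots)
for dev in sdb sdc sdd sde sdf sdg sdh sdi; do
  echo noop > /sys/block/$dev/queue/scheduler
done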

Below are the results from two of the test-runs I did.

Benchmark IOPS with 1 SSD drive

Ran 1 IOZONE process with 24 threads.
Each thread writes a 1048576 Kbyte file in 4 Kbyte records.

iozone -l24 -T -k 1000 -s1g -r4k -i0 -i1 -i2 -O -+u -R -b res.xls



Benchmark IOPS with 8 SSD drives

Ran 8 IOZONE processes in parallel (one process on each SSD), each running 12 threads.
Each thread writes a 262144 Kbyte file in 4 Kbyte records.

iozone -l12 -T -k 1000 -s256m -r4k -i0 -i1 -i2 -O -+u -R -b res.xls
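
The command above is what each process ran; a sketch of one way to launch all eight in parallel, one per mount point (the wrapper loop is an illustration, not necessarily how I scripted the run):

#!/bin/bash
# Sketch: start one IOZONE process per SSD mount point and wait for all of them
for n in 01 02 03 04 05 06 07 08; do
  ( cd /ssd$n && \
    iozone -l12 -T -k 1000 -s256m -r4k -i0 -i1 -i2 -O -+u -R -b res-$n.xls ) &
done
wait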



All 8 IOZONE processes started in the same second and ended within 1 second of each other.

This is by no means a definitive result, but an indication of the performance the server hardware is capable of delivering with the chosen configuration and setup. YMMV.

Twitter: @NorSoulX
