Archive for the ‘Muggle Tech’ Category

one drobo down, one to go.

Tuesday, July 20th, 2010

My first Drobo has completed its syncing. I have moved all the data off to another host. I broke my rule and went to Fry's for a new array: I picked up a ReadyNAS NV+, since nobody else in town had one. Man, sales tax has gone way up since the last time I bought electronics.

I set up the system and started my rsyncs. This morning, I reviewed their status and ran into some permissions problems. Apparently, the group IDs didn't match, so rsync could not "chgrp" the files. Not a big deal; I just changed them on the local drive and re-synced. Then I ran into an interesting issue. I normally do a "du" to confirm that the right number of bytes copied over, but I was getting larger values on the destination than on the source and was scratching my head. It turns out that every directory was reported larger than its counterpart on the source. The source is an HFS+ volume and the destination is an AFP volume. Apparently, AFP volumes carry more attributes than HFS+, so the inode needs to store more data and hence the reported size is bigger.
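
For what it's worth, the copy-and-verify loop boils down to something like this; the volume paths and group name here are placeholders, not my actual layout:

# paths/group are placeholders; -E preserves Mac metadata with Apple's bundled rsync
% sudo chgrp -R staff /Volumes/Source/media
% rsync -avE /Volumes/Source/media/ /Volumes/ReadyNAS/media/
% du -sk /Volumes/Source/media /Volumes/ReadyNAS/media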

I only have two more volumes to copy, but I now need to figure out how to move the Time Machine sparse bundles over to the ReadyNAS, as the volume it uses for Time Machine doesn't appear to be a normal share.

the recovery continues.

Tuesday, July 20th, 2010

After repairing the volume, I was still unable to mount it. It turns out the journal was corrupt. Again, I don't get how this can happen. I was able to issue the following command:

% /System/Library/Filesystems/hfs.fs/hfs.util -N /dev/disk1s2
Turned off the journaling bit for /dev/disk1s2

I then re-ran fsck and remounted the volume. It complained again that the volume was corrupt, but this time it let me mount it. I am now syncing the data over to a new set of drives so that I can re-purpose these drives.
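
For the record, the re-check and remount came down to something like this, assuming the same device node as above:

% sudo fsck_hfs /dev/disk1s2
% diskutil mount disk1s2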

thank you sooo much SDG&E!

Monday, July 19th, 2010

Please note the sarcasm 😉

Today, I was awakened by the sound of UPS beeps. Apparently, today was the day they planned to swap out our meters for the 3rd time. I figured everything would be OK since most of my equipment is on a UPS.

Well, I discovered that two pieces of equipment (my Mac mini and one of my Drobos) rebooted. I will be checking the cables and batteries on them later today. After the mini rebooted, the Drobo volume did NOT mount. It saw the device and the volume, but refused to mount. I ran Disk Utility and did a disk repair, but it failed.

So I brushed up on my UNIX skills and tried to fix this manually. I ran "fsck", but it reported a bad superblock. I tried running with an alternate superblock and that failed. I then learned that fsck is not always successful on HFS+ and that you should run "fsck_hfs" instead. I ran this and it complained that the catalog was corrupted. Huh? I thought I had a journaling file system; this should never happen. 99.9% of the articles on the web claim I am totally screwed and should re-format and restore.
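
For anyone following along, the command that finally gave me a real answer was roughly this; a sketch, using the device node from my follow-up post:

# -f forces a check even on a journaled volume
% sudo fsck_hfs -f /dev/disk1s2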

Well, that's the stupidest thing I have ever heard, so I dug into the manual pages. I found an option, "-r", that rebuilds the catalog. So far it has been rebuilding for the last two hours. It's encouraging, as it's way past the point where it failed before. Because this is 650 gigs worth of data, it may take several more hours. Unlike Solaris and Linux, it doesn't give me any idea of its progress, so I just have to wait it out.
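
If you end up in the same spot, the rebuild itself is a one-liner against the raw, unmounted device (substitute your own device node):

% sudo fsck_hfs -r /dev/disk1s2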

Once this is all done, I plan to consolidate all of my data onto a single Drobo and use the 2nd unit as an rsync copy. Ultimately (when I can afford it), I plan to purchase an eSATA array, plug that back into a Solaris host, and run ZFS on it.
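
The rsync copy doesn't need anything fancy; a nightly crontab entry along these lines should do it (volume names are placeholders):

# mirror the primary Drobo to the second unit at 3am; --delete keeps a true copy
0 3 * * * /usr/bin/rsync -aE --delete /Volumes/DroboPrimary/ /Volumes/DroboMirror/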

some RAID is more equal than others.

Monday, July 19th, 2010

Today I was dealing with hard drive issues on my Drobo. I own both a Drobo and a Drobo2. I thought I was having issues with my Mac mini running Snow Leopard Server. The boot drive was my Drobo2 on FireWire. It ran fine in the beginning, but got worse over time. Finally, the system slowed to a crawl.

I received a red warning light on my Drobo, and of course I replaced the drive. Then my Drobo told me that it would take 143 hours to recover. I figured the problem was the drive I replaced it with (a WD Green, which is known to have issues), so I grabbed a Seagate drive and put that in, but received the same message. I have four 500 GB drives in this unit, so a rebuild should take no more than about 24 hours. The same drives in my TiVo RAID array take about 8 hours to sync.

So, I think it's time to retire these Drobo units. Mine is not an isolated incident. Apparently, this is so well known on the net that some have talked about a class-action lawsuit to get their money back, or at least an upgrade to the newer version that the company says fixes all these problems. These units didn't come cheap, and I, as a consumer, will not shell out another $500+ to upgrade in the hope that it will fix the problem. I use my Drobos as centralized storage that is shared out over both AFP and CIFS. I will not use their DroboShare, because that is the biggest piece of carp (sic) ever produced in the last 10 years.

So, I think I will go back to traditional RAID-1/5 systems. I really like the RAID array I have for my TiVo. The only bright side to this entire headache is that my data is still there; it just takes 24 hours to copy a few gigs off the dang thing. I have been looking at the ReadyNAS NV+/Pro, but I keep thinking, "I never had an issue when running Solaris, not a single one." So I think I will invest in eSATA arrays and a decent low-power PC with a decent amount of memory. I can then retire my Mac mini and my HP desktop. The only requirement is that I be able to support AFP volumes and Time Machine shares.
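
On a non-Mac host, that last requirement usually means netatalk; a minimal sketch of what I have in mind, assuming netatalk 2.x and placeholder paths and share names:

# append shares to AppleVolumes.default; "options:tm" flags the share for Time Machine
% cat >> /usr/local/etc/netatalk/AppleVolumes.default <<'EOF'
/export/media "Media" options:usedots,upriv
/export/tm "TimeMachine" options:tm
EOF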

QuickBooks needs 4 CPUs (really)?

Friday, May 14th, 2010

I was looking at the specs for QuickBooks Premier for a client of mine. I found this on Intuit's web site:

Requirements
# Windows XP (SP2), Vista or 7
# 2.0 GHz Pentium 4 processor (2.4 GHz recommended)
# 512 MB RAM (1 GB recommended) for a single user, 1 GB of RAM for multiple, concurrent users

Seriously, I need 4 processors to run a single copy of an accounting package? This seems like way overkill.