thank you sooo much SDG&E!

July 19th, 2010

Please note the sarcasm 😉

Today, I was awakened by the sound of UPS beeps. Apparently, today was the day they planned to swap out our meters for the 3rd time. I figured everything would be OK, since most of my equipment is on UPS power.

Well, I discovered that two pieces of equipment rebooted: my Mac mini and one of my Drobos. I will be checking their cables and batteries later today. After the mini rebooted, the Drobo volume did NOT mount. The system saw the device and the volume, but refused to mount it. I ran Disk Utility and did a disk repair, but it failed.

So I brushed up on my UNIX skills and tried to fix this manually. I ran “fsck”, but it reported a bad superblock; I tried running with an alternate superblock, and that failed too. I then learned that fsck is not always the right tool here and that you should run “fsck_hfs”. I ran this, and it complained that the catalog was corrupted. Huh? I thought I had a journaling file system; this should never happen. 99.9% of the articles on the web claim I am totally screwed and should reformat and restore.

Well, that’s the stupidest thing I have heard, so I dug into the manual pages. I found an option, “-r”, that rebuilds the catalog. So far it has been rebuilding for the last two hours. It’s encouraging, as it’s way past the point where it failed before. Because this is 650 gigs worth of data, it may take several more hours. Unlike Solaris and Linux, it gives no indication of progress, so I just have to wait it out.
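For anyone hitting the same wall, the sequence looked roughly like this. The device name is just an example (find yours with `diskutil list`), and the commands are printed dry-run style rather than executed, since fsck_hfs has to be pointed at your real, unmounted device:

```shell
# Dry-run sketch of the recovery steps: each command is printed, not run.
# /dev/disk2s2 and /Volumes/Drobo are example names for illustration.
run() { echo "+ $*"; }

DISK=/dev/disk2s2
run umount /Volumes/Drobo      # the volume must be unmounted first
run fsck_hfs -f "$DISK"        # forced check; this is what reported the corrupt catalog
run fsck_hfs -r "$DISK"        # -r rebuilds the catalog B-tree in place
```

Drop the `run` wrapper (and run as root) to do it for real; just be sure nothing is writing to the volume.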

Once this is all done, I plan to consolidate all of my data onto a single Drobo and use the 2nd unit as an rsync copy. Ultimately (when I can afford it), I plan to purchase an eSATA array, plug that into a Solaris host, and run ZFS on it.

some RAID is more equal than others.

July 19th, 2010

Today I was dealing with hard drive issues on my Drobo. I own both a Drobo and a Drobo2. I thought I was having issues with my Mac mini running Snow Leopard Server. The boot drive was my Drobo2 on FireWire. It ran fine in the beginning but got worse over time, until the system finally slowed to a crawl.

I received a red warning light on my Drobo and, of course, replaced the drive. Then my Drobo told me it would take 143 hours to recover. I figured the problem was the replacement drive (a WD Green, which are known to have issues), so I grabbed a Seagate drive and put that in, but received the same message. I have four 500-gig drives in this unit, so it should take no more than about 24 hours (or less). The same drives in my TiVo RAID array take about 8 hours to sync.
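As a sanity check on that 24-hour figure, here is the back-of-the-envelope math. The ~6 MB/s effective rebuild rate is my own illustrative assumption (rebuilds run far below raw disk speed because the array stays online), not a number from Drobo:

```shell
# Rough rebuild-time estimate: capacity / sustained rebuild rate.
# One 500 GB drive's worth of data, assumed ~6 MB/s (illustrative only).
capacity_mb=$((500 * 1000))       # 500 GB expressed in (decimal) MB
rate_mb_s=6
seconds=$((capacity_mb / rate_mb_s))
hours=$((seconds / 3600))
echo "$hours hours"               # about a day -- nowhere near 143
```

Even at half that rate you get roughly two days, so 143 hours is hard to explain as normal behavior.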

So, I think it’s time to retire these Drobo units. I am not an isolated case. Apparently, this is so well known on the net that some have talked about a class-action lawsuit to get their money back, or at least get upgraded to the newer version that the company says fixes all these problems. These units didn’t come cheap, and I, as a consumer, will not shell out another $500+ to upgrade in the hope that this will fix the problem. I use my Drobos as centralized storage shared out over both AFP and CIFS. I will not use their DroboShare, because that is the biggest piece of carp (sic) produced in the last 10 years.

So, I think I will go back to traditional RAID-1/5 systems. I really like the RAID array I have for my TiVo. The only bright side to this entire headache is that my data is still there; it just takes 24 hours to copy a few gigs off the dang thing. I have been looking at the ReadyNAS NV+/Pro, but I keep thinking, “I never had an issue when running Solaris, not a single one.” So I think I will invest in eSATA arrays and a low-power PC with a decent amount of memory. I can then retire my Mac mini and my HP desktop. The only requirement is that it support AFP volumes and Time Machine shares.

bogus bandwidth speed tests.

July 18th, 2010

I was re-wiring my network today and realized I had put a 10 Mbps hub between my cable modem and router. I use this in case I need to monitor traffic between my router and TWC. What was most interesting is that I had been told to run speed tests from http://speedtest.rrsan.com/ and was getting results in the range of 15-22 Mbps. Clearly, anyone with first-grade math skills can see the problem here: a 10 Mbps hub cannot pass more than 10 Mbps.
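Spelled out, the impossibility looks like this (the numbers are the ones from my setup above):

```shell
# A 10 Mbps hub caps throughput at 10 megabits/s (1.25 megabytes/s),
# before you even subtract protocol overhead -- yet the speed test
# reported 15-22 Mbps through that very link.
link_mbps=10
reported_low=15

echo "link cap: ${link_mbps} Mbps, reported: ${reported_low}+ Mbps"
if [ "$reported_low" -gt "$link_mbps" ]; then
    echo "reported speed exceeds the physical link -> the test is bogus"
fi
```

Whatever that page is measuring, it isn’t the traffic actually crossing my wire.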

I intend to install a bandwidth throttler and see if I can figure out what is going on. What this has taught me is that I can’t trust their web site.

Do you know what you just said?

July 13th, 2010

This week, I have heard a cliché spoken several times. I am not sure why, but it’s the 5th time I have heard it. Well, my issue is not that I heard it, but that all five people GOT IT WRONG.

The phrase I heard was “It’s a difficult road to hoe.” The word is not road, but row. If you would stop and think about the phrase instead of mindlessly repeating something you don’t understand, you would realize what it means.

Farmers do not take hoes to a local road and cut trenches in it. They use a hoe to prepare the soil for planting, in rows.

BikeMS Training begins

May 14th, 2010

It’s that time of year again: time to start training for the BikeMS ride. I have created a Meetup.com group for training schedules and discussion. We will have our first meetup this Saturday at Miramar Lake for beginning cyclists.

Check out the new site at: http://www.meetup.com/BikeMS/