custom hw vs cots

Michael T. Halligan michael at halligan.org
Tue Aug 23 20:51:03 PDT 2005


>- you're right in most things you're saying for
>  cots vs reinventing the wheel
>
>  but since i gave you the impression we prefer
>  to "reinvent the wheel" and/or avoid dell/hp/ibm/metoo,
>  here's some whacky background info for more
>  poking and shaking of the fingers :-)
>  
>

That's fair :) Excuse me if I come off as confrontational; most conversations
on this list have tended to be children having temper tantrums. Most threads
on the mailing lists I'm on, except for the Infrastructures list (which has
very few threads), tend to be people basing wild assumptions on very small
practices/implementations. That, and it's been a long weekend.

>- in the days before *.com,  there were very few ( 2 )
>  vendors of reasonable 1U chassis, so we made our own
>  custom chassis per customer requests to solve
>  the heat problem of p3-500 and AMD athlons and
>  provide more than 1 disk
>
>	- there were 2 very very bad 1u chassis
>	from the industry standard rackmount vendors
>
>	- there were NO 1U power supplies so we
>	designed our own ... and of course, by 
>	the time we finished the prelim designs,
>	we could buy $150 prototype 1U power supplies
>	which we did and placed orders for more 
>	which dropped in price overnight
>  
>

Interesting.  My approach would have been quite different, however. I'd
have found a manufacturer who already made power supplies, given them
the requirements, then made enough calls to show that the demand would
exist if they provided the part. Manufacturing is a rough industry; I grew
up in that world. By going to someone who already has the volume, you stop
losing out on economies of scale in some areas very, very quickly.

I remember when 1U chassis became the thing in the beowulf world,
and all of a sudden power, heat, and floor requirements mattered. It
was rather fun to throw 38 45lb 1Us into a 42U rack that already held 50lbs
of other gear and itself weighed 75lbs, box it up into a crate, try to safely
get that onto a truck backed up to a non-functioning loading dock, and then
hope it wouldn't fall through our customer's floors.

>- now of course, everybody has rackmount chassis
>  and if you look closely, it's mostly the same chassis
>  re-branded as another "me too" vendor
>	- there are those that make their own chassis
>	( dell, tyan, mb-vendors, ibm, hp, etc )
>  
>

Until 3 years ago, Dell purchased their chassis from Synnex, I believe.
IBM, I'm pretty sure, still buys their chassis from an Asian supplier. I know
a company that sells thousands of IBM-equivalent servers, without the IBM
logo, that it didn't buy from IBM. I'm pretty sure they even use the same
motherboards, but they don't include drivers for the on-board diagnostics or
management.

>- for tomorrow's market:
>
>	- we're reinventing the wheel again, for blade
>	boxes that will support 10 independent systems
>	with 15TB in 4U of space or about 160TB per cabinet
>
>	- i'm again, assuming we'd have to design our
>	own custom 24-port gigE switch since the budget
>	doesn't allow $$$ for fancy cots 24-port gigE switches
>
>	but, that is the minor hangup for now ...
>	where we do twiddle dee.. twiddle dah to see
>	who comes up with a cheap 24-port gigE that has
>	the sustained performance between any 2 nodes
>
>	- i think plugging 16 disks into one motherboard
>	will be a serious bottleneck for managing 15TB of disks
>
>  
>
How could there possibly be any good reason to design your own blade
system? Explain the business reasoning here. I'm still not sold that blades
are a good idea. Power is expensive, very expensive, and PCs are lousy at
power usage. Maybe a G5-based blade system, yeah, but x86? Space is very
cheap, and the premium on blades far outweighs the cost of space. The only
good reason I could see for blades is theoretically reduced latency. But if
latency is that important, NUMA systems might be comparable in speed and
pricing, especially once you factor in reliability.
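A minimal sketch of that space-versus-premium tradeoff, in Python. Every
figure below is an assumed, illustrative number (the colo pricing, densities,
and blade premium are not from this thread), so plug in real quotes before
trusting the result:

    # Does blade density pay for itself in floor space? Illustrative numbers only.
    cabinet_per_month = 1000.0       # assumed colo price for a full cabinet, USD/month
    servers_per_cab_1u = 38          # 1U boxes per 42U cabinet, leaving room for switches/PDUs
    servers_per_cab_blade = 60       # assumed blade density in the same footprint

    space_cost_1u = cabinet_per_month * 12 / servers_per_cab_1u        # ~$316 per server/year
    space_cost_blade = cabinet_per_month * 12 / servers_per_cab_blade  # ~$200 per server/year
    space_saved = space_cost_1u - space_cost_blade                     # ~$116 per server/year

    blade_premium = 800.0            # assumed extra purchase price per blade vs. a 1U box
    print("years to recoup the premium on space savings alone:",
          round(blade_premium / space_saved, 1))                       # ~6.9 years

Under those assumptions the density savings take years to pay back the
premium, and the power bill scales with the number of servers rather than
the rack units, so blades don't help there either.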

Is gigE necessary? For what reasons? If it's throughput, then you should 
look into Myrinet or Quadrics.

16 disks on one motherboard is a bad solution. Managed JBODs, or a SAN with
its own controllers, are probably better ideas.
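A rough sketch of why one board becomes the bottleneck, using circa-2005
assumptions (the per-disk throughput and bus efficiency below are guesses
for illustration, not measurements):

    # 16 disks vs. one host bus, back-of-envelope.
    disks = 16
    per_disk_mb_s = 60                  # assumed sustained MB/s for a SATA drive of that era
    aggregate = disks * per_disk_mb_s   # 960 MB/s of raw disk bandwidth

    pci_x_peak = 64 / 8 * 133           # one 64-bit/133 MHz PCI-X bus: ~1064 MB/s theoretical
    pci_x_usable = pci_x_peak * 0.7     # assume ~70% of peak is achievable in practice

    print(f"disks can supply ~{aggregate} MB/s; one bus moves ~{pci_x_usable:.0f} MB/s")

With everything behind one host, the bus and controllers saturate before the
disks do; spreading the disks across JBOD shelves or SAN controllers spreads
that load.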


>- can you/we buy a 100TB system off the shelf ???
>
>	- yes .. but it's not cheap ...
>
>	and we're way underpriced so we're hoping to
>	get the hw contract
>
>	add back in for the marketing/sales and tradeshow costs
>	and we'd be in the same price point as
>	everybody else ... ( 100x mark up of hw costs )
>
>	- it's targeted for a major colo facility 
>
>- then add "services" to the custom hardware and hopefully
>  everybody is a happy camper
>
>  
>
Really depends on the 100TB. Speed, reliability, or just space? Disk storage
might be one area where building your own solution isn't too bad, mainly
after doing a lot of research on disk prices/margins. Still, you just
answered your own question about marketing, sales, support, etc. I'm sure
you've realized that unless you sell N of these, your margins approach zero,
and you need to make it up with services or consulting. Perhaps partnering
with a larger provider and getting a good VAR discount would be a better use
of your time while providing a stronger profit margin.
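For a feel of where the margin pressure starts, a quick sketch of the
raw-disk cost of 100TB; the drive size and street price are assumptions
about the 2005 market, not figures from this thread:

    # Raw disk cost of ~100TB, with illustrative drive size/price assumptions.
    target_tb = 100
    drive_gb = 400                   # assumed capacity of a large SATA drive of the day
    drive_price = 270.0              # assumed street price per drive, USD

    drives_needed = target_tb * 1000 // drive_gb     # 250 drives, ignoring redundancy
    raw_disk_cost = drives_needed * drive_price      # ~$67,500 in disks alone

    print(drives_needed, "drives,", "about $%.0f" % raw_disk_cost,
          "before chassis, controllers, RAID overhead, or labor")

Whatever you charge above that line has to cover integration, support, and
sales overhead, which is where the sell-N-of-these math bites.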




