Discussion:
External drive enclosures + Sun Server for mass storage
David J. Orman
2007-01-20 01:59:13 UTC
Hi,

I'm looking at Sun's 1U x64 server line, and at most they support two drives. This is fine for the root OS install, but obviously not sufficient for many users.

Specifically, I am looking at the X2200 M2: http://www.sun.com/servers/x64/x2200/

It only has "Riser card assembly with two internal 64-bit, 8-lane, low-profile, half length PCI-Express slots" for expansion.

What I'm looking for is a SAS/SATA card that would allow me to add an external SATA enclosure (or some such device) to add storage. The supported list on the HCL is pretty slim, and I see no PCI-E stuff. A card that supports SAS would be *ideal*, but I can settle for normal SATA too.

So, anybody have any good suggestions for these two things:

#1 - SAS/SATA PCI-E card that would work with the Sun X2200M2.
#2 - Rack-mountable external enclosure for SAS/SATA drives, supporting hot swap of drives.

Basically, I'm trying to get around using Sun's extremely expensive storage solutions while waiting on them to release something reasonable now that ZFS exists.
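For example, once a suitable HBA and enclosure are in place, I'd expect to build the pool with something like this (just a sketch; the device names are hypothetical):

  # six hypothetical disks in the external enclosure, pooled as a single raidz vdev
  zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
  # sanity-check the layout and carve out a filesystem
  zpool status tank
  zfs create tank/data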

Cheers,
David


Jason J. W. Williams
2007-01-20 02:24:15 UTC
Hi David,

I don't know if your company qualifies as a startup under Sun's regs
but you can get an X4500/Thumper for $24,000 under this program:
http://www.sun.com/emrkt/startupessentials/

Best Regards,
Jason
Post by David J. Orman
Hi,
I'm looking at Sun's 1U x64 server line, and at most they support two drives. This is fine for the root OS install, but obviously not sufficient for many users.
Specifically, I am looking at the: http://www.sun.com/servers/x64/x2200/ X2200M2.
It only has "Riser card assembly with two internal 64-bit, 8-lane, low-profile, half length PCI-Express slots" for expansion.
What I'm looking for is a SAS/SATA card that would allow me to add an external SATA enclosure (or some such device) to add storage. The supported list on the HCL is pretty slim, and I see no PCI-E stuff. A card that supports SAS would be *ideal*, but I can settle for normal SATA too.
#1 - SAS/SATA PCI-E card that would work with the Sun X2200M2.
#2 - Rack-mountable external enclosure for SAS/SATA drives, supporting hot swap of drives.
Basically, I'm trying to get around using Sun's extremely expensive storage solutions while waiting on them to release something reasonable now that ZFS exists.
Cheers,
David
David J. Orman
2007-01-20 20:59:51 UTC
Post by Jason J. W. Williams
Hi David,
I don't know if your company qualifies as a startup
under Sun's regs
but you can get an X4500/Thumper for $24,000 under
http://www.sun.com/emrkt/startupessentials/
Best Regards,
Jason
I'm already a part of the Startup Essentials program. Perhaps I should have been clearer; my apologies. I am not looking for 48 drives' worth of storage. This is beyond our means to purchase at this point, regardless of the $/GB. I do agree, it is quite a good deal.

I was talking about the huge gap in storage solutions from Sun for the middle-ground. While $24,000 is a wonderful deal, it's absolute overkill for what I'm thinking about doing. I was looking for more around 6-8 drives.

David


Erik Trimble
2007-01-20 02:47:30 UTC
Post by David J. Orman
Hi,
I'm looking at Sun's 1U x64 server line, and at most they support two drives. This is fine for the root OS install, but obviously not sufficient for many users.
Specifically, I am looking at the: http://www.sun.com/servers/x64/x2200/ X2200M2.
It only has "Riser card assembly with two internal 64-bit, 8-lane, low-profile, half length PCI-Express slots" for expansion.
What I'm looking for is a SAS/SATA card that would allow me to add an external SATA enclosure (or some such device) to add storage. The supported list on the HCL is pretty slim, and I see no PCI-E stuff. A card that supports SAS would be *ideal*, but I can settle for normal SATA too.
#1 - SAS/SATA PCI-E card that would work with the Sun X2200M2.
#2 - Rack-mountable external enclosure for SAS/SATA drives, supporting hot swap of drives.
Basically, I'm trying to get around using Sun's extremely expensive storage solutions while waiting on them to release something reasonable now that ZFS exists.
Cheers,
David
Not to be picky, but the X2100 and X2200 series are NOT
designed/targeted for disk serving (they don't even have redundant power
supplies). They're compute-boxes. The X4100/X4200 are what you are
looking for to get a flexible box more oriented towards disk i/o and
expansion.

That said (if you're set on an X2200 M2), you are probably better off
getting a PCI-E SCSI controller, and then attaching it to an external
SCSI->SATA JBOD. There are plenty of external JBODs out there which use
Ultra320/Ultra160 as a host interface and SATA as a drive interface.
Sun will sell you a supported SCSI controller with the X2200 M2 (the
"Sun StorageTek PCI-E Dual Channel Ultra320 SCSI HBA").

SCSI is far better for a host attachment mechanism than eSATA if you
plan on doing more than a couple of drives, which it sounds like you
are. While the SCSI HBA is going to cost quite a bit more than an eSATA
HBA, the external JBODs run about the same, and the total difference is
going to be $300 or so across the whole setup (which will cost you $5000
or more fully populated). So the cost to use SCSI vs eSATA as the host-
attach is a rounding error.
--
Erik Trimble
Java System Support
Mailstop: usca14-102
Phone: x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)
Frank Cusack
2007-01-20 07:28:23 UTC
Post by Erik Trimble
Not to be picky, but the X2100 and X2200 series are NOT
designed/targeted for disk serving (they don't even have redundant power
supplies). They're compute-boxes. The X4100/X4200 are what you are
looking for to get a flexible box more oriented towards disk i/o and
expansion.
But x4100/x4200 only accept expensive 2.5" SAS drives, which have
small capacities. That doesn't seem oriented towards disk serving.

-frank
Rich Teer
2007-01-20 18:18:12 UTC
Post by Frank Cusack
But x4100/x4200 only accept expensive 2.5" SAS drives, which have
small capacities. [...]
... and only 2 or 4 drives each. Hence my blog entry a while back,
wishing for a Sun-badged 1U SAS JBOD with room for 8 drives. I'm
amazed that Sun hasn't got a product to fill this obvious (to me
at least) hole in their storage catalogue.
--
Rich Teer, SCSA, SCNA, SCSECA, OpenSolaris CAB member

President,
Rite Online Inc.

Voice: +1 (250) 979-1638
URL: http://www.rite-group.com/rich
David J. Orman
2007-01-20 21:07:27 UTC
Post by Rich Teer
Post by Frank Cusack
But x4100/x4200 only accept expensive 2.5" SAS drives, which have
small capacities. [...]
... and only 2 or 4 drives each. Hence my blog entry a while back,
wishing for a Sun-badged 1U SAS JBOD with room for 8 drives. I'm
amazed that Sun hasn't got a product to fill this obvious (to me
at least) hole in their storage catalogue.
--
Rich Teer, SCSA, SCNA, SCSECA, OpenSolaris CAB member
President,
Rite Online Inc.
Voice: +1 (250) 979-1638
URL: http://www.rite-group.com/rich
This is exactly what I am looking for. I apparently was not clear in my original post. I am looking for a 6-8 drive external solution to tie into Sun servers. The existing Sun solutions in this range are very expensive. For instance, the 3511 is ~$37,000 for 12 x 500GB drives.

I can buy good-quality Seagate drives for $200 each. That comes to a grand total of $2,400. Somehow I doubt the enclosure/drive controllers are worth ~$34,000. It's an insane markup.

That's why I was asking for an external JBOD solution. The Sun servers I've looked at are all priced excellently, and I'd love to use them - but the storage solutions are a bit crazy. Not to mention, I don't want to get tied into FC, seeing as 10gE is around the corner. I'd rather use some kind of external interface that's reasonable.

On that note, I've recently read that the 1U Sun servers may not have hot-swappable disk drives... is this really true? If so, that makes this whole plan silly; I could just go out and buy a Supermicro machine, save money all around, and have the 6-8 drives in the same box as the server.

Thanks,
David


Frank Cusack
2007-01-20 22:18:09 UTC
On January 20, 2007 1:07:27 PM -0800 "David J. Orman"
Post by David J. Orman
On that note, I've recently read it might be the case that the 1u sun
servers do not have hot-swappable disk drives... is this really true?
Only for the x2100 (and x2100m2). It's not that the hardware isn't
hot-swappable, it's that Solaris doesn't support it. If you run
Windows you will get hot swap.

-frank
Erik Trimble
2007-01-20 22:48:36 UTC
Post by Frank Cusack
On January 20, 2007 1:07:27 PM -0800 "David J. Orman"
Post by David J. Orman
On that note, I've recently read it might be the case that the 1u sun
servers do not have hot-swappable disk drives... is this really true?
Only for the x2100 (and x2100m2). It's not that the hardware isn't
hot-swappable, it's that Solaris doesn't support it. If you run
Windows you will get hot swap.
-frank
I believe this also applies to the X2200 M2. Essentially, all
the low-end x64 servers using SATA have Nvidia chipsets which
theoretically support hot-swap of SATA; as noted, the Windows drivers
do support this feature, while the Solaris 10 drivers don't (and I
don't know if there are plans to add this feature or not).

Personally, I've always been a bit nervous about using chipset-based RAID
and expecting hot-swap to actually work, particularly with SATA. I've been
bitten on various (non-Sun) hardware trying this, and it has
made me gun-shy about thinking I can actually pull a SATA drive while its
mirror is still mounted...
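When I do try it, I at least make sure the pool can tolerate it first -- a rough sketch with ZFS (pool and device names are made up):

  # confirm the pool has no other problems before touching anything
  zpool status -x
  # take the suspect half of the mirror offline before pulling it
  zpool offline tank c1t1d0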
--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)
Toby Thain
2007-01-21 02:15:22 UTC
Post by Erik Trimble
Post by Frank Cusack
On January 20, 2007 1:07:27 PM -0800 "David J. Orman"
Post by David J. Orman
On that note, I've recently read it might be the case that the 1u sun
servers do not have hot-swappable disk drives... is this really true?
Only for the x2100 (and x2100m2). It's not that the hardware isn't
hot-swappable, it's that Solaris doesn't support it. If you run
Windows you will get hot swap.
-frank
I believe this also applies to the X2200 M2 as well. Essentially,
all the low-end x64 servers using SATA have Nvidia chipsets which
theoretically support Hot-swap of SATA; as noted, the Windows
drivers do support this feature, while the Solaris 10 drivers don't
(and, I don't know if there are plans to add this feature or not).
Personally, I've always been a bit nervous of using chipset-based
RAID and expecting Hot-swap to actually, particularly with SATA.
I've been bitten on various different (non-Sun) hardware trying
this, and it has made me gun-shy of thinking I can actually pull a
SATA drive while its mirror is still mounted...
Some of us don't give a damn about "chipset RAID" and want hotswap/
hotplug drives with SVM and/or ZFS.
Post by Erik Trimble
To be clear, Sun defines "hot swap" as a device which can be inserted or
removed without system administration tasks required.
Sun defines "hot plug" as a device which can be inserted or removed without
causing damage or interruption to a running system, but which may require
system administration. The vast majority of the disks Sun sells are hot
pluggable.
To be clear: the X2100 drives are neither "hotswap" nor "hotplug"
under Solaris. Replacing a failed drive requires a reboot.

--Toby
Post by Erik Trimble
--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)
Frank Cusack
2007-01-22 17:39:19 UTC
To be clear: the X2100 drives are neither "hotswap" nor "hotplug" under
Solaris. Replacing a failed drive requires a reboot.
Also, adding a drive that wasn't present at boot requires a reboot.

-frank
Frank Cusack
2007-01-23 03:07:20 UTC
On January 22, 2007 12:12:19 PM -0600 Brian Hechinger
Post by Frank Cusack
To be clear: the X2100 drives are neither "hotswap" nor "hotplug" under
Solaris. Replacing a failed drive requires a reboot.
Also, adding a drive that wasn't present at boot requires a reboot.
This couldn't possibly be true, unless we've taken major steps backwards
as this has always been possible (at least on sparc)
It is true. Try it.

[Sorry to send a reply to a personal mail back to the list, but your
email address bounces

450 <***@ia64.int.dittman.net>: Recipient address rejected: Domain not
found]

-frank
Peter Karlsson
2007-01-23 05:02:48 UTC
Hi Frank,

Try 'man devfsadm'; devfsadm will update the device tree with your new disk
drives. 'disks' is an older command that does about the same thing.
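Something like this usually does it (assuming the controller driver sees the new device at all):

  # rescan for new disks and rebuild the /dev links
  devfsadm -c disk
  # the new drive should then appear in the disk list
  echo | format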

Cheers,
Peter
Post by Frank Cusack
On January 22, 2007 12:12:19 PM -0600 Brian Hechinger
Post by Frank Cusack
Post by Toby Thain
To be clear: the X2100 drives are neither "hotswap" nor "hotplug" under
Solaris. Replacing a failed drive requires a reboot.
Also, adding a drive that wasn't present at boot requires a reboot.
This couldn't possibly be true, unless we've taken major steps backwards
as this has always been possible (at least on sparc)
It is true. Try it.
[Sorry to send a reply to a personal mail back to the list, but your
email address bounces
not found]
-frank
Frank Cusack
2007-01-23 05:07:53 UTC
yes I am an experienced Solaris admin and know all about devfsadm :-)
and the older disks command.

It doesn't help in this case. I think it's a BIOS thing. Linux and
Windows can't see IDE drives that aren't there at boot time either,
and on Solaris the SATA controller runs in some legacy mode so I guess
that's why you can't see the newly added drive.

Unfortunately all my x2100 hardware is in production and I can't
readily retest this to verify.

-frank

On January 23, 2007 12:02:48 PM +0700 Peter Karlsson
Post by Peter Karlsson
Hi Frank,
try man devfsadm, it will update devfs with your new disk drives. disks
is an older command that does about the same thing.
Cheers,
Peter
Post by Frank Cusack
On January 22, 2007 12:12:19 PM -0600 Brian Hechinger
Post by Frank Cusack
Post by Toby Thain
To be clear: the X2100 drives are neither "hotswap" nor "hotplug" under
Solaris. Replacing a failed drive requires a reboot.
Also, adding a drive that wasn't present at boot requires a reboot.
This couldn't possibly be true, unless we've taken major steps backwards
as this has always been possible (at least on sparc)
It is true. Try it.
[Sorry to send a reply to a personal mail back to the list, but your
email address bounces
not found]
-frank
Bart Smaalders
2007-01-23 18:51:20 UTC
Post by Frank Cusack
yes I am an experienced Solaris admin and know all about devfsadm :-)
and the older disks command.
It doesn't help in this case. I think it's a BIOS thing. Linux and
Windows can't see IDE drives that aren't there at boot time either,
and on Solaris the SATA controller runs in some legacy mode so I guess
that's why you can't see the newly added drive.
Unfortunately all my x2100 hardware is in production and I can't
readily retest this to verify.
-frank
This is exactly the issue; some of the simple SATA controllers are
used in PATA compatibility mode. The ide driver doesn't
know a thing about hot anything, so we would need a proper
SATA driver for these chips. Since they work (with the exception
of hot *) it is difficult to prioritize this work above getting
some other piece of hardware working under Solaris. In addition,
switching drivers & BIOS configs during upgrade is a non-trivial
exercise.
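For what it's worth, one way to see which mode a given box is in (driver names are only examples; output varies by platform):

  # if the controller is bound to pci-ide/ata it is running in legacy PATA mode;
  # a native SATA driver (nv_sata, for example) is what hot-plug would need
  prtconf -D | grep -i -e ide -e sata
  # cfgadm only lists sata attachment points when a SATA framework driver is attached
  cfgadm -al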


- Bart
--
Bart Smaalders Solaris Kernel Performance
***@cyber.eng.sun.com http://blogs.sun.com/barts
Toby Thain
2007-01-23 22:11:24 UTC
Post by Bart Smaalders
Post by Frank Cusack
yes I am an experienced Solaris admin and know all about devfsadm :-)
and the older disks command.
It doesn't help in this case. I think it's a BIOS thing. Linux and
Windows can't see IDE drives that aren't there at boot time either,
and on Solaris the SATA controller runs in some legacy mode so I guess
that's why you can't see the newly added drive.
Unfortunately all my x2100 hardware is in production and I can't
readily retest this to verify.
-frank
This is exactly the issue; some of the simple SATA drives
are used in PATA compatibility mode. The ide driver doesn't
know a thing about hot anything, so we would need a proper
SATA driver for these chips. Since they work (with the exception
of hot *) it is difficult to prioritize this work
Disappointing but not completely surprising - "What do you expect,
it's an entry level product, not a high end product."

Still, it would be nice for those of us who bought them. And judging by
other posts in this thread, it seems just about everyone assumes
hotswap "just works".

--Toby
Post by Bart Smaalders
above getting
some other piece of hardware working under Solaris. In addition,
switching drivers & bios configs during upgrade is a non-trivial
exercise.
- Bart
--
Bart Smaalders Solaris Kernel Performance
Frank Cusack
2007-01-23 22:18:19 UTC
It's interesting what topics come up here that really have little to
do with ZFS. I guess it just shows how great ZFS is. I mean, you would
never have a UFS list that talked about the merits of SATA vs SAS and
which hardware to buy. Also interesting is that ZFS exposes hardware bugs,
yet I don't think that's what really drives the hardware questions here.

-frank
Bart Smaalders
2007-01-24 03:10:52 UTC
Post by Frank Cusack
It's interesting the topics that come up here, which really have little to
do with zfs. I guess it just shows how great zfs is. I mean, you would
never have a ufs list that talked about the merits of sata vs sas and what
hardware do i buy. Also interesting is that zfs exposes hardware bugs
yet I don't think that's what really drives the hardware questions here.
Actually, I think it's the easy administration of more than a simple mirror...
so all of a sudden it's simple to deal with multiple drives, add more
later, etc... so connectivity to low-end boxes becomes important.

Also, of course, SATA is still relatively new and we don't yet
have extensive controller support (understatement).


- Bart
--
Bart Smaalders Solaris Kernel Performance
***@cyber.eng.sun.com http://blogs.sun.com/barts
Frank Cusack
2007-01-25 07:09:28 UTC
Post by Toby Thain
Still, would be nice for those of us who bought them. And judging by
other posts on this thread it seems just about everyone assumes hotswap
"just works".
hot *plug* :-)

-frank
Toby Thain
2007-01-25 11:48:11 UTC
On January 23, 2007 8:11:24 PM -0200 Toby Thain
Post by Toby Thain
Still, would be nice for those of us who bought them. And judging by
other posts on this thread it seems just about everyone assumes hotswap
"just works".
hot *plug* :-)
Hmm, yes, sloppy of me.
--T
-frank
To be clear, Sun defines "hot swap" as a device which can be inserted or
removed without system administration tasks required.
Sun defines "hot plug" as a device which can be inserted or removed without
causing damage or interruption to a running system, but which may require
system administration.
Richard Elling
2007-01-21 01:58:53 UTC
Post by Frank Cusack
On January 20, 2007 1:07:27 PM -0800 "David J. Orman"
Post by David J. Orman
On that note, I've recently read it might be the case that the 1u sun
servers do not have hot-swappable disk drives... is this really true?
Yes.
Post by Frank Cusack
Only for the x2100 (and x2100m2). It's not that the hardware isn't
hot-swappable, it's that Solaris doesn't support it. If you run
Windows you will get hot swap.
No.

To be clear, Sun defines "hot swap" as a device which can be inserted or
removed without system administration tasks required.

Sun defines "hot plug" as a device which can be inserted or removed without
causing damage or interruption to a running system, but which may require
system administration. The vast majority of the disks Sun sells are hot
pluggable.

That said, this definition is not always used consistently, as is the case
with the x2100. I filed a bug against the docs in this case, and unfortunately
it was closed as "will not fix." :-(
-- richard
Rich Teer
2007-01-21 02:12:14 UTC
Post by Richard Elling
To be clear, Sun defines "hot swap" as a device which can be inserted or
removed without system administration tasks required.
Sun defines "hot plug" as a device which can be inserted or removed without
causing damage or interruption to a running system, but which may require
system administration. The vast majority of the disks Sun sells are hot
pluggable.
OK; given the above definitions, could you please confirm one way
or another that the disks in the X2100 are hot pluggable? In other
words, if I have a pair of mirrored drives in an X2100 and one of
those drives dies, can I take out and replace the defective drive
without down time?
--
Rich Teer, SCSA, SCNA, SCSECA, OpenSolaris CAB member

President,
Rite Online Inc.

Voice: +1 (250) 979-1638
URL: http://www.rite-group.com/rich
Toby Thain
2007-01-21 02:55:30 UTC
Post by Rich Teer
Post by Richard Elling
To be clear, Sun defines "hot swap" as a device which can be
inserted or
removed without system administration tasks required.
Sun defines "hot plug" as a device which can be inserted or
removed without
causing damage or interruption to a running system, but which may require
system administration. The vast majority of the disks Sun sells are hot
pluggable.
OK; given the above definitions, could you please confirm one way
or another that the disks in the X2100 are hot pluggable? In other
words, if I have a pair of mirrored drives in an X2100 and one of
those drives dies, can I take out and replace the defective drive
without down time?
NO - unless you're running Windows AND "chipset RAID" (or whatever
you want to call it). This is easily verified by experiment with
Solaris 10.

More information via links I posted earlier in this thread, or buried
in X2100 documentation.

--Toby
Post by Rich Teer
--
Rich Teer, SCSA, SCNA, SCSECA, OpenSolaris CAB member
President,
Rite Online Inc.
Voice: +1 (250) 979-1638
URL: http://www.rite-group.com/rich
C***@Sun.COM
2007-01-21 10:36:11 UTC
Post by Richard Elling
That said, this definition is not always used consistently, as is the case
with the x2100. I filed a bug against the docs in this case, and unfortunately
it was closed as "will not fix." :-(
In the context of a hardware platform it makes little sense to
distinguish between hot-plug and hot-swap. The distinction is purely
based on the capabilities of the software.

Casper
Richard Elling
2007-01-22 18:03:14 UTC
Post by C***@Sun.COM
Post by Richard Elling
That said, this definition is not always used consistently, as is the case
with the x2100. I filed a bug against the docs in this case, and unfortunately
it was closed as "will not fix." :-(
In the context of a hardware platform it makes little sense to
distinguish between hot-plug and hot-swap. The distinction is purely
based on the capabilities of the software.
Agree. I filed the bug against the docs with the justification that it
confuses customers. The bug was closed and we continue to have confused
customers :-(
Post by Toby Thain
To be clear: the X2100 drives are neither "hotswap" nor "hotplug" under
Solaris. Replacing a failed drive requires a reboot.
I do not believe this is true, though I don't have one to test. If this
were true, then we would have had to rewrite the disk drivers to not allow
us to open a device more than once, even if we also closed the device.
I can't imagine anyone allowing such code to be written.

However, I don't believe this is the context of the issue. I believe that
this release note deals with the use of NVRAID (NVidia's MCP RAID controller)
which does not have a systems management interface under Solaris. The
solution is to not use NVRAID for Solaris. Rather, use the proven techniques
that we've been using for decades to manage hot plugging drives.

In short, the release note is confusing, so ignore it. Use x2100 disks as
hot pluggable like you've always used hot plug disks in Solaris.
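On hardware and drivers where hot plug does work, the usual sequence is roughly this (attachment point and device names are hypothetical):

  # locate the attachment point for the failed disk
  cfgadm -al
  # offline it, physically swap the drive, then bring the new one in
  cfgadm -c unconfigure sata1/3
  cfgadm -c configure sata1/3
  # resilver the mirror onto the replacement
  zpool replace tank c2t3d0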
-- richard
Frank Cusack
2007-01-22 18:54:03 UTC
On January 22, 2007 10:03:14 AM -0800 Richard Elling
Post by Richard Elling
To be clear: the X2100 drives are neither "hotswap" nor "hotplug" under
Solaris. Replacing a failed drive requires a reboot.
I do not believe this is true, though I don't have one to test.
Well if you won't accept multiple technically adept people's word on it,
I highly suggest you get one to test instead of speculating.
Post by Richard Elling
If this
were true, then we would have had to rewrite the disk drivers to not allow
us to open a device more than once, even if we also closed the device.
I can't imagine anyone allowing such code to be written.
Obviously you have not rewritten the disk drivers to do this, so this is
the wrong line of reasoning.
Post by Richard Elling
However, I don't believe this is the context of the issue. I believe that
this release note deals with the use of NVRAID (NVidia's MCP RAID controller)
which does not have a systems management interface under Solaris. The
solution is to not use NVRAID for Solaris. Rather, use the proven techniques
that we've been using for decades to manage hot plugging drives.
No, the release note is not about NVRAID.
Post by Richard Elling
In short, the release note is confusing, so ignore it. Use x2100 disks as
hot pluggable like you've always used hot plug disks in Solaris.
Again, NO these drives are not hot pluggable and the release note is
accurate. PLEASE get a system to test. Or take our word for it.

-frank
Jason J. W. Williams
2007-01-22 19:02:03 UTC
Hi Frank,

I'm sure Richard will check it out. He's a very good guy and not
trying to jerk you around. I'm sure the hostility isn't warranted. :-)

Best Regards,
Jason
Post by Frank Cusack
On January 22, 2007 10:03:14 AM -0800 Richard Elling
Post by Richard Elling
To be clear: the X2100 drives are neither "hotswap" nor "hotplug" under
Solaris. Replacing a failed drive requires a reboot.
I do not believe this is true, though I don't have one to test.
Well if you won't accept multiple technically adept people's word on it,
I highly suggest you get one to test instead of speculating.
Post by Richard Elling
If this
were true, then we would have had to rewrite the disk drivers to not allow
us to open a device more than once, even if we also closed the device.
I can't imagine anyone allowing such code to be written.
Obviously you have not rewritten the disk drivers to do this, so this is
the wrong line of reasoning.
Post by Richard Elling
However, I don't believe this is the context of the issue. I believe that
this release note deals with the use of NVRAID (NVidia's MCP RAID controller)
which does not have a systems management interface under Solaris. The
solution is to not use NVRAID for Solaris. Rather, use the proven techniques
that we've been using for decades to manage hot plugging drives.
No, the release note is not about NVRAID.
Post by Richard Elling
In short, the release note is confusing, so ignore it. Use x2100 disks as
hot pluggable like you've always used hot plug disks in Solaris.
Again, NO these drives are not hot pluggable and the release note is
accurate. PLEASE get a system to test. Or take our word for it.
-frank
Frank Cusack
2007-01-22 19:12:33 UTC
I certainly did NOT mean any hostility whatsoever. I highly value what
Richard offers in this forum. I'm just frustrated at the misinformation
which is being presented as authoritative. Repeatedly.

But to be clear, in my mind Richard is one of the "good ones" and I
eagerly read what he has to say -- to the point that when he chimes
in on a thread I've been ignoring I start reading it, and always
learn something. OK enough of that, back to the bashing! :-)

-frank

On January 22, 2007 12:02:03 PM -0700 "Jason J. W. Williams"
Post by Jason J. W. Williams
Hi Frank,
I'm sure Richard will check it out. He's a very good guy and not
trying to jerk you around. I'm sure the hostility isn't warranted. :-)
Best Regards,
Jason
Post by Frank Cusack
On January 22, 2007 10:03:14 AM -0800 Richard Elling
Post by Richard Elling
Post by Toby Thain
To be clear: the X2100 drives are neither "hotswap" nor "hotplug"
under Solaris. Replacing a failed drive requires a reboot.
I do not believe this is true, though I don't have one to test.
Well if you won't accept multiple technically adept people's word on it,
I highly suggest you get one to test instead of speculating.
Post by Richard Elling
If this
were true, then we would have had to rewrite the disk drivers to not
allow us to open a device more than once, even if we also closed the
device. I can't imagine anyone allowing such code to be written.
Obviously you have not rewritten the disk drivers to do this, so this is
the wrong line of reasoning.
Post by Richard Elling
However, I don't believe this is the context of the issue. I believe
that this release note deals with the use of NVRAID (NVidia's MCP RAID
controller)
which does not have a systems management interface under Solaris. The
solution is to not use NVRAID for Solaris. Rather, use the proven techniques
that we've been using for decades to manage hot plugging drives.
No, the release note is not about NVRAID.
Post by Richard Elling
In short, the release note is confusing, so ignore it. Use x2100
disks as hot pluggable like you've always used hot plug disks in
Solaris.
Again, NO these drives are not hot pluggable and the release note is
accurate. PLEASE get a system to test. Or take our word for it.
-frank
Frank Cusack
2007-01-22 19:28:37 UTC
Post by Frank Cusack
Post by Richard Elling
In short, the release note is confusing, so ignore it. Use x2100
disks as hot pluggable like you've always used hot plug disks in
Solaris.
Again, NO these drives are not hot pluggable and the release note is
accurate. PLEASE get a system to test. Or take our word for it.
hmm I think I may have just figured out the problem here.

YES the x2100 is that bad. I too found it quite hard to believe that
Sun would sell this without hot plug drives. It seems like a step
backwards.

(and of course I don't mean that the x2100 is awful; it's great
hardware and very well priced ... now if only hot plug worked!)

My main issue is that the x2100 is advertised as supporting hot plug.
You have to dig pretty deep -- deeper than would be expected of a
"typical" buyer -- to find that Solaris does not support it.

-frank
Jason J. W. Williams
2007-01-22 19:42:04 UTC
Hi Guys,

The original X2100 was a pile of doggie doo-doo. All of our problems
with it go back to the atrocious quality of the nForce 4 Pro chipset.
The NICs in particular are just crap. The M2s are better, but the
MCP55 chipset has not resolved all of its flakiness issues. That being
said, Sun designed that case with hot-plug bays; if Solaris isn't going
to support them, then they shouldn't be there, in my opinion.

Best Regards,
Jason
Post by Frank Cusack
Post by Frank Cusack
Post by Richard Elling
In short, the release note is confusing, so ignore it. Use x2100
disks as hot pluggable like you've always used hot plug disks in
Solaris.
Again, NO these drives are not hot pluggable and the release note is
accurate. PLEASE get a system to test. Or take our word for it.
hmm I think I may have just figured out the problem here.
YES the x2100 is that bad. I too found it quite hard to believe that
Sun would sell this without hot plug drives. It seems like a step
backwards.
(and of course I don't mean that the x2100 is awful, it's a great
hardware and very well priced ... now if only hot plug worked!)
My main issue is that the x2100 is advertised as hot plug working.
You have to dig pretty deep -- deeper than would be expected of a
"typical" buyer -- to find that Solaris does not support it.
-frank
Toby Thain
2007-01-22 21:14:12 UTC
Post by Frank Cusack
Post by Frank Cusack
Post by Richard Elling
In short, the release note is confusing, so ignore it. Use x2100
disks as hot pluggable like you've always used hot plug disks in
Solaris.
Won't work - some of us have tested it.
Post by Frank Cusack
Post by Frank Cusack
Again, NO these drives are not hot pluggable and the release note is
accurate. PLEASE get a system to test. Or take our word for it.
hmm I think I may have just figured out the problem here.
YES the x2100 is that bad. I too found it quite hard to believe that
Sun would sell this without hot plug drives. It seems like a step
backwards.
(and of course I don't mean that the x2100 is awful, it's a great
hardware and very well priced ... now if only hot plug worked!)
My main issue is that the x2100 is advertised as hot plug working.
Agree 100% with the above. In all other respects I like the X2100.

I've come to accept the lack of hotswap as an indirect consequence of
market segmentation, but it would be great if it worked (like Frank I
saw nothing to indicate the contrary until someone pointed me to fine
print in the release notes, long after we purchased).

--Toby
Post by Frank Cusack
You have to dig pretty deep -- deeper than would be expected of a
"typical" buyer -- to find that Solaris does not support it.
-frank
David J. Orman
2007-01-22 19:19:40 UTC
Post by Jason J. W. Williams
Hi Frank,
I'm sure Richard will check it out. He's a very good
guy and not
trying to jerk you around. I'm sure the hostility
isn't warranted. :-)
Best Regards,
Jason
I'm very confused now. Do the x2200m2s support "hot plug" of drives or not? I can't believe it's that confusing/difficult. They do or they don't. I don't care whether I can just yank a drive out of a running system with no problems, but I *do* need to be able to swap a failed disk in a mirror without downtime.

Does Sun not have an official word on this? I'm rapidly losing faith given the lack of a definitive answer to this question.

Along these same lines, what is the roadmap for ZFS on boot disks? I've not heard anything about it in quite some time, and google doesn't yield any current information either.


Frank Cusack
2007-01-22 19:31:43 UTC
On January 22, 2007 11:19:40 AM -0800 "David J. Orman"
Post by David J. Orman
I'm very confused now. Do the x2200m2s support "hot plug" of drives or
not? I can't believe it's that confusing/difficult. They do or they
don't.
Running Solaris, they do not.
Post by David J. Orman
I don't care if I can just yank a drive in a running system out
and have no problems, but I *do* need to be able to swap a failed disk in
a mirror without downtime.
Then the x2100/x2200 is not for you in a standard configuration. You might
be able to find a PCI-E sata card and use that instead of the onboard SATA.
I'm hoping to find such a card.

-frank
David J. Orman
2007-01-22 19:38:35 UTC
Post by Frank Cusack
On January 22, 2007 11:19:40 AM -0800 "David J. Orman"
Post by David J. Orman
I'm very confused now. Do the x2200m2s support "hot plug" of drives or
not? I can't believe it's that confusing/difficult. They do or they
don't.
Running Solaris, they do not.
Wow. What was/is Sun thinking here? Glad I happened to ask the question; this makes the X2* series a total waste to purchase.
Post by Frank Cusack
Post by David J. Orman
I don't care whether I can just yank a drive out of a running system
with no problems, but I *do* need to be able to swap a failed disk in
a mirror without downtime.
Then the x2100/x2200 is not for you in a standard configuration. You might
be able to find a PCI-E SATA card and use that instead of the onboard SATA.
I'm hoping to find such a card.
I'm not going to pay for hardware that can't handle very basic things such as mirrored boot drives on the vendor-provided OS. That's insane.

Guess it's time to investigate Supermicro and Tyan solutions, startup-essentials program or not - that makes no hardware sense.

Who do I gripe to concerning this (we're starting to stray from discussion pertinent to this list...)? Would I gripe to my sales rep?

Thanks for the clarity,
David


Frank Cusack
2007-01-22 19:52:49 UTC
On January 22, 2007 11:38:35 AM -0800 "David J. Orman"
Post by David J. Orman
Guess it's time to investigate Supermicro and Tyan solutions,
startup-essentials program or not - that makes no hardware sense.
I know it seems ridiculous to HAVE to buy a 3rd party card, but come
on it is only $50 or so. Assuming you don't need both pci slots for
other uses.

I personally wouldn't want to deal with "PC" hardware suppliers directly.
Putting together and maintaining those kinds of systems is a PITA. The
$50 is worth it. Assuming it will work. Especially under the startup
program you're going to have as good or better prices from Sun, and
good support.

-frank
David J. Orman
2007-01-22 20:03:51 UTC
Post by Frank Cusack
I know it seems ridiculous to HAVE to buy a 3rd party card, but come
on, it is only $50 or so. Assuming you don't need both PCI slots for
other uses.
I do. Two would have gone to external access for a JBOD (if that ever gets sorted out, haha) - most external adapters seem to support 4 disks.
Post by Frank Cusack
I personally wouldn't want to deal with "PC" hardware suppliers directly.
Neither would I, hence looking to Sun. :)
Post by Frank Cusack
Putting together and maintaining those kinds of systems is a PITA.
Well, the Supermicro and Tyan systems generally are not.
Post by Frank Cusack
The $50 is worth it. Assuming it will work.
Herein lies the problem, more following...
Post by Frank Cusack
Especially under the startup program you're going to have as good or
better prices from Sun,
With the program, the prices are still more than I would pay from Supermicro/Tyan, but they are acceptably higher, as the integration/support would be much better, of course. Except this does not seem to be the case with the X2* series.
Post by Frank Cusack
and good support.
Here is the big problem. I'd be buying a piece of Sun hardware specifically for this reason, already paying more (even with the startup essentials program) - but do you think Sun is going to support that SAS/SATA controller I bought? If something doesn't work, or later gets broken (for example, the driver disappears/breaks in a later version of Solaris) - what will I do then? Nothing. :) Might as well buy whitebox if I'm going to build the system out in a whitebox-way. ;)

I'd much prefer Sun products, however - I just expect them to support Sun's flagship OS, and be supported fully. I'm going to look into the X4* series assuming they don't have such problems with supported boot disk mirroring/hot plugging/etc.

Thanks,
David


Frank Cusack
2007-01-22 20:29:01 UTC
On January 22, 2007 12:03:51 PM -0800 "David J. Orman"
Post by David J. Orman
I'd much prefer Sun products, however - I just expect them to support
Sun's flagship OS, and be supported fully. I'm going to look into the X4*
series assuming they don't have such problems with supported boot disk
mirroring/hot plugging/etc.
I have had great success with an x4100. It just works. I wish it had
OBP or EFI instead of the BIOS, but whatever.

-frank
Frank Cusack
2007-01-22 20:40:13 UTC
On January 22, 2007 12:03:51 PM -0800 "David J. Orman"
Post by David J. Orman
Post by Frank Cusack
I know it seems ridiculous to HAVE to buy a 3rd party
card, but come
on it is only $50 or so. Assuming you don't need
both pci slots for
other uses.
I do. Two would have gone to external access for a JBOD (if that ever
gets sorted out, haha) - most external adapters seem to support 4 disks.
You can't actually use those adapters in the x2100/x2200 or even the
x4100/x4200. The slots are "MD2" low profile slots and the 4 port adapters
require a full height slot. Even the x4600 only has MD2 slots. So you can
only use 2 port adapters. I think there are esata cards that use the
infiniband (SAS style) connector, which will fit in an MD2 slot and still
access 4 drives, but I'm not aware of any that Solaris supports.

Unfortunately, Solaris does not support SATA port multipliers (yet) so
I think you're pretty limited in how many esata drives you can connect.

External SAS is pretty much a non-starter on Solaris (today) so I think
you're left with iscsi or FC if you need more than just a few drives and
you want to use Sun servers instead of building your own.

-frank
David J. Orman
2007-01-22 20:45:29 UTC
Post by Frank Cusack
You can't actually use those adapters in the x2100/x2200 or even the
x4100/x4200. The slots are "MD2" low profile slots and the 4 port adapters
require a full height slot. Even the x4600 only has MD2 slots. So you can
only use 2 port adapters. I think there are esata cards that use the
infiniband (SAS style) connector, which will fit in an MD2 slot and still
access 4 drives, but I'm not aware of any that Solaris supports.
Fair enough. :)
Post by Frank Cusack
Unfortunately, Solaris does not support SATA port multipliers (yet) so
I think you're pretty limited in how many esata drives you can connect.
Gotcha.
Post by Frank Cusack
External SAS is pretty much a non-starter on Solaris (today) so I think
you're left with iscsi or FC if you need more than just a few drives and
you want to use Sun servers instead of building your own.
iSCSI is interesting to me. Are there any JBOD iSCSI external arrays that would allow me to use SAS/SATA drives? I'd actually prefer this to eSATA, as network cable is even more easily dealt with. Toss in one of the dual/quad gigabit cards and run iSCSI to a JBOD filled with SATA/SAS drives == winning solution for me. 4 Gbit via network, avoiding all of the expense of FC, is nothing to sniff at.

Would this still be workable with ZFS? Ideally, I'd like 8-10 drives, running raidz2. Know of any products out there I should be looking at, in terms of the hardware enclosure/iSCSI interface for the drives?
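In my head it would go roughly like this (the array address and device names are invented, just to show the shape of it):

  # point the Solaris iSCSI initiator at the array and let it discover the LUNs
  iscsiadm add discovery-address 192.168.1.50:3260
  iscsiadm modify discovery --sendtargets enable
  devfsadm -i iscsi
  # then pool the exported disks as a double-parity raidz2
  zpool create tank raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0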

David


Frank Cusack
2007-01-22 20:53:24 UTC
On January 22, 2007 12:45:29 PM -0800 "David J. Orman"
Post by David J. Orman
Post by Frank Cusack
External SAS is pretty much a non-starter on Solaris
(today) so I think
you're left with iscsi or FC if you need more than
just a few drives and
you want to use Sun servers instead of building your
own.
I should add: or scsi of course.
Post by David J. Orman
iSCSI is interesting to me, are there any JBOD iSCSI external arrays that
would allow me to use SAS/SATA drives?
There are a few. I'm thinking about a Promise M500i. They make smaller
ones also. Not 100% sure though; I might just stick with FC.
Post by David J. Orman
I'd actually prefer this to eSATA,
as network cable is even more easily dealt with. Toss in one of the
dual/quad gigabit cards and run iSCSI to a JBOD filled with SATA/SAS
drives == winning solution for me. 4gbit via network avoiding all of the
expense of FC is nothing to sniffle at.
The promise only has 2 gbit ports, and I'm not sure if they can load
balance. So if you want multi-gigabit performance you should look to
other enclosures.
Post by David J. Orman
Would this still be workable with ZFS?
My understanding is yes, but I haven't done this personally yet.
Post by David J. Orman
Know of any products out there
Promise, DNF, and Nexsan are the ones I know of. Oh, I think Adaptec might
have one also.

-frank
mike
2007-01-22 21:17:04 UTC
I'm dying here - does anyone know when or even if they will support these?

I had this whole setup planned out but it requires eSATA + port multipliers.

I want to use ZFS, but currently cannot in that fashion. I'd still
have to buy some [more expensive, noisier, bulky internal drive]
solution for ZFS. Unless anyone has other ideas. I'm looking to run a
5-10 drive system (with easy ability to expand) in my home office; not
in a datacenter.

Even opening up to iSCSI seems to not get me much - there aren't any
SOHO type NAS enclosures that act as iSCSI targets. There are however
handfuls of eSATA based 4, 5, and 10 drive enclosures perfect for
this... but all require the port multiplier support.
Post by Frank Cusack
Unfortunately, Solaris does not support SATA port multipliers (yet) so
I think you're pretty limited in how many esata drives you can connect.
Robert Suh
2007-01-23 17:32:32 UTC
People trying to hack together systems might want to look
at the HP DL320s

http://h10010.www1.hp.com/wwpc/us/en/ss/WF05a/15351-241434-241475-241475-f79-3232017.html

12 drive bays, Intel Woodcrest, SAS (and SATA) controller. If you snoop
around, you might be able to find drive carriers on eBay or elsewhere
(*cough* search "HP drive sleds" or "HP drive carriers"). $3k for the
chassis. A mini Thumper.

Though I'm not sure if Solaris supports the Smart Array controller.

Rob

-----Original Message-----
From: zfs-discuss-***@opensolaris.org
[mailto:zfs-discuss-***@opensolaris.org] On Behalf Of mike
Sent: Monday, January 22, 2007 1:17 PM
To: zfs-***@opensolaris.org
Subject: Re: [zfs-discuss] Re: Re: Re: Re: External drive enclosures +
Sun


I'm dying here - does anyone know when or even if they will support
these?

I had this whole setup planned out but it requires eSATA + port
multipliers.

I want to use ZFS, but currently cannot in that fashion. I'd still
have to buy some [more expensive, noisier, bulky internal drive]
solution for ZFS. Unless anyone has other ideas. I'm looking to run a
5-10 drive system (with easy ability to expand) in my home office; not
in a datacenter.

Even opening up to iSCSI seems to not get me much - there aren't any
SOHO type NAS enclosures that act as iSCSI targets. There are however
handfuls of eSATA based 4, 5, and 10 drive enclosures perfect for
this... but all require the port multiplier support.
Post by Frank Cusack
Unfortunately, Solaris does not support SATA port multipliers (yet) so
I think you're pretty limited in how many esata drives you can
connect.
Jason J. W. Williams
2007-01-23 19:28:02 UTC
I believe the SmartArray is an LSI, like the Dell PERC, isn't it?

Best Regards,
Jason
Post by Robert Suh
People trying to hack together systems might want to look
at the HP DL320s
http://h10010.www1.hp.com/wwpc/us/en/ss/WF05a/15351-241434-241475-241475
-f79-3232017.html
12 drive bays, Intel Woodcrest, SAS (and SATA) controller. If you snoop
around, you
might be able to find drive carriers on eBay or elsewhere (*cough*
search "HP drive sleds"
"HP drive carriers") $3k for the chassis. A mini thumper.
Though I'm not sure if Solaris supports the Smart Array controller.
Rob
-----Original Message-----
Sent: Monday, January 22, 2007 1:17 PM
Subject: Re: [zfs-discuss] Re: Re: Re: Re: External drive enclosures +
Sun
I'm dying here - does anyone know when or even if they will support these?
I had this whole setup planned out but it requires eSATA + port multipliers.
I want to use ZFS, but currently cannot in that fashion. I'd still
have to buy some [more expensive, noisier, bulky internal drive]
solution for ZFS. Unless anyone has other ideas. I'm looking to run a
5-10 drive system (with easy ability to expand) in my home office; not
in a datacenter.
Even opening up to iSCSI seems to not get me much - there aren't any
SOHO type NAS enclosures that act as iSCSI targets. There are however
handfuls of eSATA based 4, 5, and 10 drive enclosures perfect for
this... but all require the port multiplier support.
Post by Frank Cusack
Unfortunately, Solaris does not support SATA port multipliers (yet) so
I think you're pretty limited in how many esata drives you can
connect.
Toby Thain
2007-01-22 21:16:20 UTC
Post by Richard Elling
Post by C***@Sun.COM
Post by Richard Elling
That said, this definition is not always used consistently, as is the case
with the x2100. I filed a bug against the docs in this case, and unfortunately
it was closed as "will not fix." :-(
In the context of a hardware platform it makes little sense to
distinguish between hot-plug and hot-swap. The distinction is purely
based on the capabilities of the software.
Agree. I filed the bug against the docs with the justification that it
confuses customers. The bug was closed and we continue to have confused
customers :-(
Post by Toby Thain
To be clear: the X2100 drives are neither "hotswap" nor "hotplug" under
Solaris. Replacing a failed drive requires a reboot.
I do not believe this is true, though I don't have one to test.
This error has been sufficiently addressed in later posts, I think...
Post by Richard Elling
If this
were true, then we would have had to rewrite the disk drivers to not allow
us to open a device more than once, even if we also closed the device.
I can't imagine anyone allowing such code to be written.
However, I don't believe this is the context of the issue. I
believe that
this release note deals with the use of NVRAID (NVidia's MCP RAID controller)
which does not have a systems management interface under Solaris. The
solution is to not use NVRAID for Solaris. Rather, use the proven techniques
that we've been using for decades to manage hot plugging drives.
I have no interest in NVRAID whatsoever. I use SVM and ZFS.

Furthermore, NVRAID is the only method that *does* allow hotswap on
X2100! (Bizarrely, only with Windows, which is of course useless to
me too.)

--Toby
Post by Richard Elling
In short, the release note is confusing, so ignore it. Use x2100 disks as
hot pluggable like you've always used hot plug disks in Solaris.
-- richard
Dan Mick
2007-01-22 22:28:26 UTC
Post by C***@Sun.COM
Post by Richard Elling
That said, this definition is not always used consistently, as is the case
with the x2100. I filed a bug against the docs in this case, and unfortunately
it was closed as "will not fix." :-(
In the context of a hardware platform it makes little sense to
distinguish between hot-plug and hot-swap. The distinction is purely
based on the capabilities of the software.
well, back when I tried (in vain) to apply some common terminology to this,
there were SCSI backplanes that had sequenced logic-vs-power connections on
insert vs. remove, and had "generate an interrupt on insert or remove"
capability....and there were backplanes that did not.

The former class was maybe kinda practical to support unassisted "surprise"
plugging. The latter made it impossible.

I'm sure no one knows what their hardware capabilities ever are, because
the industry has completely failed to come up with sane nomenclature for
the hardware capabilities...and then we multiply that confusion by having
no sane nomenclature for OS capabilities either, and the OS capabilities
are never discussed as though they depend on the hardware, which, of
course, they do.
Erik Trimble
2007-01-20 22:42:38 UTC
Post by Rich Teer
Post by Frank Cusack
But x4100/x4200 only accept expensive 2.5" SAS drives, which have
small capacities. [...]
... and only 2 or 4 drives each. Hence my blog entry a while back,
wishing for a Sun-badged 1U SAS JBOD with room for 8 drives. I'm
amazed that Sun hasn't got a product to fill this obvious (to me
at least) hole in their storage catalogue.
The Sun 3120 does 4 x 3.5" SCSI drives in a 1U, and the Sun 3320 does 12
x 3.5" in 2U. Both come in JBOD configs (and the 3320 has HW Raid if you
want it).

Yes, I'm certain that having 8-10 SAS drives in a 1U might be useful; HP
thinks so: the MSA50
(http://h18004.www1.hp.com/storage/disk_storage/msa_diskarrays/drive_enclosures/ma50/index.html)

But, given that Sun doesn't seem to be really targeting Small Business
right now (at least, it appears that way), the 3120 works quite well,
feature-wise, for Medium Business/Enterprise areas.

I priced out the HP MSA-series vs the Sun StorageTek 3000-series, and
the HP stuff is definitely cheaper. By a noticeable amount. So I'd say
Sun has less of a hardware selection gap than a pricing gap. The
current "low end" of the Sun line just isn't cheap enough.



Of course the opinions expressed herein are my own, and I have no
special knowledge of anything relevant to this discussion. (TM)

:-)
--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)
Rich Teer
2007-01-20 23:43:54 UTC
The Sun 3120 does 4 x 3.5" SCSI drives in a 1U, and the Sun 3320 does 12 x
3.5" in 2U. Both come in JBOD configs (and the 3320 has HW Raid if you want
it).
Yep; I know about those products. But the entry-level 3120 (with
2 x 73GB disks) has a list price of $5K! I'm a Sun supporter, but
those kinds of prices are akin to daylight robbery! Or, to put it
another way, the list price of that simple JBOD is more than twice
as expensive as the X4100--a server it would probably be connected
to!

But more to the point, SAS seems to be future, so it would be
really nice to have a Sun SAS JBOD array. As I said in my blog
about this, if Sun could produce an 8-drive SAS 1U JBOD array,
with a starting price (say, 2 x 36GB drives with 2 hot-swappable
PSUs) of $2K, they'd sell 'em by the truckload. I mean, let's
be honest: when we're talking about low end JBOD arrays, we're
talking about one or two PSUs, some mechanism for holding the
drives, a bit of electronics, and a metal case to put it all in.
No expensive rocket science necessary.
Yes, I'm certain that having 8-10 SAS drives in a 1U might be useful; HP
thinks so: the MSA50
(http://h18004.www1.hp.com/storage/disk_storage/msa_diskarrays/drive_enclosures/ma50/index.html)
Yep, that's what I'm thinking of, only in a nice case that is similar
to the X4100 (for economies of scale and pretty data centers).
But, given that Sun doesn't seem to be really targeting Small Business right
now (at least, it appears that way), the 3120 works quite well, feature-wise,
for Medium Business/Enterprise areas..
But that's the point: Sun IS targeting Small Business: that's
what the Sun Startup Essentials program is all about! Not to
mention the programs aimed at developers.

Agreed, Sun isn't targeting the mum and dad kind of business,
but there are a huge number of businesses that need more storage
than will fit into an X4200/T2000 but less than what's available
with (say) the 3320.
So I'd say Sun has less of a hardware selection gap than a pricing gap. The
current "low end" of the Sun line just isn't cheap enough.
Couldn't agree more.
--
Rich Teer, SCSA, SCNA, SCSECA, OpenSolaris CAB member

President,
Rite Online Inc.

Voice: +1 (250) 979-1638
URL: http://www.rite-group.com/rich
Erik Trimble
2007-01-20 22:20:05 UTC
Permalink
On January 19, 2007 6:47:30 PM -0800 Erik Trimble
Post by Erik Trimble
Not to be picky, but the X2100 and X2200 series are NOT
designed/targeted for disk serving (they don't even have redundant power
supplies). They're compute-boxes. The X4100/X4200 are what you are
looking for to get a flexible box more oriented towards disk i/o and
expansion.
But x4100/x4200 only accept expensive 2.5" SAS drives, which have
small capacities. That doesn't seem oriented towards disk serving.
-frank
Those are boot drives, and they work for those with small amounts of data
(and you get 73GB and soon 143GB drives in that form factor, which isn't
really any different from typical 3.5" SCSI drive sizes).

No, I was talking about the internal architecture. The X4100/X4200 have
multiple independent I/O buses, with lots of PCI-E and PCI-X slots. So
if you were looking to hook up external storage (which was the original
poster's intent), the X4100/X4200 is a much better match.

-Erik
--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)
David J. Orman
2007-01-22 19:30:03 UTC
Permalink
Post by Erik Trimble
Not to be picky, but the X2100 and X2200 series are
NOT
designed/targeted for disk serving (they don't even
have redundant power
supplies). They're compute-boxes. The X4100/X4200
are what you are
looking for to get a flexible box more oriented
towards disk i/o and
expansion.
I don't see those as being any better suited to external disks, other than:

#1 - They have the capacity for redundant PSUs, which is irrelevant to my needs.
#2 - They only have PCI Express slots, and I can't find any good external SATA interface cards on PCI Express

I can't wrap my head around the idea that I should buy a lot more than I need, which still doesn't serve my purposes. The 4 disks in an X4100 still aren't enough, and the machine is a fair amount more costly. I just need mirrored boot drives and an external disk array.
Post by Erik Trimble
That said (if you're set on an X2200 M2), you are
probably better off
getting a PCI-E SCSI controller, and then attaching
it to an external
SCSI->SATA JBOD. There are plenty of external JBODs
out there which use
Ultra320/Ultra160 as a host interface and SATA as a
drive interface.
Sun will sell you a supported SCSI controller with
the X2200 M2 (the
"Sun StorageTek PCI-E Dual Channel Ultra320 SCSI
HBA").
SCSI is far better for a host attachment mechanism
than eSATA if you
plan on doing more than a couple of drives, which it
sounds like you
are. While the SCSI HBA is going to cost quite a bit
more than an eSATA
HBA, the external JBODs run about the same, and the
total difference is
going to be $300 or so across the whole setup (which
will cost you $5000
or more fully populated). So the cost to use SCSI vs
eSATA as the host-
attach is a rounding error.
I understand your comments in some ways, in others I do not. It sounds like we're moving backwards in time. Exactly why is SCSI "better" than SAS/SATA for external devices? From my experience (with other OSs/hardware platforms) the opposite is true. A nice SAS/SATA controller with external ports (especially those that allow multiple SAS/SATA drives via one cable - whichever tech you use) works wonderfully for me, and I get a nice thin/clean cable, which makes cable management much more "enjoyable" in higher-density situations.

I also don't agree with the logic "just spend a mere $300 extra to use older technology!"

$300 may not be much to a large business, but things like this nickel-and-dime small business owners. There are a lot of things I'd prefer to spend $300 on than an expensive SCSI HBA which offers no advantages over a SAS counterpart, and in fact has disadvantages.

Your input is of course highly valued, and it's quite possible I'm missing an important piece to the puzzle somewhere here, but I am not convinced this is the ideal solution - simply a "stick with the old stuff, it's easier" solution, which I am very much against.

Thanks,
David


This message posted from opensolaris.org
Jason J. W. Williams
2007-01-22 19:38:23 UTC
Permalink
Hi David,

Depending on the I/O you're doing, the X4100/X4200 are much better
suited because of the dual HyperTransport buses. As a storage box with
GigE outputs you've got a lot more I/O capacity with two HT buses than
one. Plus, the X4100 is just a more solid box. The X2100 M2, while
a vast improvement over the X2100 in terms of reliability and
features, is still an OEM'd whitebox. We use the X2100 M2s for
application servers, but for anything that needs solid reliability or
I/O we go Galaxy.

Best Regards,
Jason
Post by David J. Orman
Post by Erik Trimble
Not to be picky, but the X2100 and X2200 series are
NOT
designed/targeted for disk serving (they don't even
have redundant power
supplies). They're compute-boxes. The X4100/X4200
are what you are
looking for to get a flexible box more oriented
towards disk i/o and
expansion.
#1 - They have the capacity for redundant PSUs, which is irrelevant to my needs.
#2 - They only have PCI Express slots, and I can't find any good external SATA interface cards on PCI Express
I can't wrap my head around the idea that I should buy a lot more than I need, which still doesn't serve my purposes. The 4 disks in an x4100 still aren't enough, and the machine is a fair amount more costly. I just need mirrored boot drives, and an external disk array.
Post by Erik Trimble
That said (if you're set on an X2200 M2), you are
probably better off
getting a PCI-E SCSI controller, and then attaching
it to an external
SCSI->SATA JBOD. There are plenty of external JBODs
out there which use
Ultra320/Ultra160 as a host interface and SATA as a
drive interface.
Sun will sell you a supported SCSI controller with
the X2200 M2 (the
"Sun StorageTek PCI-E Dual Channel Ultra320 SCSI
HBA").
SCSI is far better for a host attachment mechanism
than eSATA if you
plan on doing more than a couple of drives, which it
sounds like you
are. While the SCSI HBA is going to cost quite a bit
more than an eSATA
HBA, the external JBODs run about the same, and the
total difference is
going to be $300 or so across the whole setup (which
will cost you $5000
or more fully populated). So the cost to use SCSI vs
eSATA as the host-
attach is a rounding error.
I understand your comments in some ways, in others I do not. It sounds like we're moving backwards in time. Exactly why is SCSI "better" than SAS/SATA for external devices? From my experience (with other OSs/hardware platforms) the opposite is true. A nice SAS/SATA controller with external ports (especially those that allow multiple SAS/SATA drives via one cable - whichever tech you use) works wonderfully for me, and I get a nice thin/clean cable which makes cable management much more "enjoyable" in higher density situations.
I also don't agree with the logic "just spend a mere $300 extra to use older technology!"
$300 may not be much to a large business, but things like this nickel-and-dime small business owners. There are a lot of things I'd prefer to spend $300 on than an expensive SCSI HBA which offers no advantages over a SAS counterpart, and in fact has disadvantages.
Your input is of course highly valued, and it's quite possible I'm missing an important piece to the puzzle somewhere here, but I am not convinced this is the ideal solution - simply a "stick with the old stuff, it's easier" solution, which I am very much against.
Thanks,
David
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
David J. Orman
2007-01-22 19:50:35 UTC
Permalink
Post by Jason J. W. Williams
Hi David,
Depending on the I/O you're doing the X4100/X4200 are
much better
suited because of the dual HyperTransport buses. As a
storage box with
GigE outputs you've got a lot more I/O capacity with
two HT buses than
one. That plus the X4100 is just a more solid box.
That much makes sense, thanks for clearing that up.
Post by Jason J. W. Williams
The X2100 M2 while
a vast improvement over the X2100 in terms of
reliability and
features, is still an OEM'd whitebox. We use the
X2100 M2s for
application servers, but for anything that needs
solid reliability or
I/O we go Galaxy.
Ahh. That explains a lot. Thank you once again!

Sounds like the X2* is the red-headed stepchild of Sun's product line. They should slap disclaimers up on the product information pages so we know better than to purchase into something that doesn't fully function.

Still unclear on the SAS/SATA solutions, but hopefully that'll progress further now in the thread.

Cheers,
David


This message posted from opensolaris.org
Jason J. W. Williams
2007-01-22 19:57:27 UTC
Permalink
Hi David,

Glad to help! I don't want to bad-mouth the X2100 M2s that much,
because they have been solid. I believe the M2s are made/designed just
for Sun by Quanta Computer (http://www.quanta.com.tw/e_default.htm),
whereas the mobo in the original X2100 was a Tyan Tiger with some
slight modifications. That all being said, the problem is that Nvidia
chipset: the MCP55 in the X2100 M2 is an alright chipset, but the
nForce 4 Pro just had bugs.

Best Regards,
Jason
Post by David J. Orman
Post by Jason J. W. Williams
Hi David,
Depending on the I/O you're doing the X4100/X4200 are
much better
suited because of the dual HyperTransport buses. As a
storage box with
GigE outputs you've got a lot more I/O capacity with
two HT buses than
one. That plus the X4100 is just a more solid box.
That much makes sense, thanks for clearing that up.
Post by Jason J. W. Williams
The X2100 M2 while
a vast improvement over the X2100 in terms of
reliability and
features, is still an OEM'd whitebox. We use the
X2100 M2s for
application servers, but for anything that needs
solid reliability or
I/O we go Galaxy.
Ahh. That explains a lot. Thank you once again!
Sounds like the X2* is the red-headed stepchild of Sun's product line. They should slap disclaimers up on the product information pages so we know better than to purchase into something that doesn't fully function.
Still unclear on the SAS/SATA solutions, but hopefully that'll progress further now in the thread.
Cheers,
David
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
mike
2007-01-22 21:12:13 UTC
Permalink
Areca makes excellent PCI Express cards - but they probably have zero
support in Solaris/OpenSolaris. I use them in both Windows and Linux.
They work natively in FreeBSD too. I believe they're still the fastest
cards on the market.

However, they're probably not very appropriate for this since it's a Solaris-based OS :(
Post by David J. Orman
#2 - They only have PCI Express slots, and I can't find any good external SATA interface cards on PCI Express
Samuel Hexter
2007-01-23 14:40:16 UTC
Permalink
Post by mike
Areca makes excellent PCI express cards - but probably have zero
support in Solaris/OpenSolaris. I use them in both Windows and Linux.
Works natively in FreeBSD too. They're the fastest cards on the market
I believe still.
However probably not very appropriate for this since it's a Solaris-based OS :(
We've got two Areca ARC-1261ML cards (PCI-E x8, up to 16 SATA disks each) running a 12TB zpool on snv54 and Areca's arcmsr driver. They're a bit of an expensive/over-the-top solution since the cards do hardware RAID-6 and cost roughly $1k each, but we're just using them as JBOD controllers. The hardware RAID-6 capability will be a nice backup if we ever ditch Solaris/ZFS but I can't see that happening any time soon; ZFS is just too good.
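
For anyone curious, building such a pool on plain JBOD controllers is roughly
this (a minimal sketch; the pool name and cNtNdN device names below are made
up, so substitute whatever format(1M) shows on your system):

  # Hypothetical device names; the controllers just expose ordinary disks.
  zpool create tank \
      raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
      raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0

  # Verify both raidz2 vdevs came up ONLINE.
  zpool status tank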


This message posted from opensolaris.org
mike
2007-01-23 20:11:44 UTC
Permalink
ooh. they support it? cool. i'll have to explore that option now.
however i still really want eSATA.
Post by Samuel Hexter
We've got two Areca ARC-1261ML cards (PCI-E x8, up to 16 SATA disks each) running a 12TB zpool on snv54 and Areca's arcmsr driver. They're a bit of an expensive/over-the-top solution since the cards do hardware RAID-6 and cost roughly $1k each, but we're just using them as JBOD controllers. The hardware RAID-6 capability will be a nice backup if we ever ditch Solaris/ZFS but I can't see that happening any time soon; ZFS is just too good.
Dan Mick
2007-01-20 06:01:43 UTC
Permalink
Post by David J. Orman
Hi,
I'm looking at Sun's 1U x64 server line, and at most they support two drives. This is fine for the root OS install, but obviously not sufficient for many users.
Specifically, I am looking at the: http://www.sun.com/servers/x64/x2200/ X2200M2.
It only has "Riser card assembly with two internal 64-bit, 8-lane, low-profile, half length PCI-Express slots" for expansion.
What I'm looking for is a SAS/SATA card that would allow me to add an external SATA enclosure (or some such device) to add storage. The supported list on the HCL is pretty slim, and I see no PCI-E stuff. A card that supports SAS would be *ideal*, but I can settle for normal SATA too.
#1 - SAS/SATA PCI-E card that would work with the Sun X2200M2.
Scouting around a bit, I see SIIG makes an eSATA II card based on the Silicon
Image 3132 chip, available in PCIe and PCIe ExpressCard form factors. I can't
promise, but chances seem good that it's supported by the si3124 driver in Solaris:

si3124 "pci1095,3124"
si3124 "pci1095,3132"

Street price for the PCIe card is $30-35.
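
If the ID isn't already bound on your build, here's a rough sketch of checking
and adding the alias by hand (untested here, and it assumes the SIIG card
really does report the pci1095,3132 ID):

  # Is the chip already claimed by the si3124 driver?
  grep si3124 /etc/driver_aliases

  # If not, add the alias and rebuild the device tree.
  update_drv -a -i '"pci1095,3132"' si3124
  devfsadm -C

  # The controller and any attached disks should then show up.
  cfgadm -al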

Also, the first hit for "PCIe eSATA" was a card based on the JMicron JMB360,
which is supposed to support AHCI, and so should be supported by the brand-new
ahci driver (just back in snv_56). Street prices for the most popular card were
showing as $29.99 in quantity 1.

I don't know whether either of these will work, but it looks promising. I also
don't know about eSATA vs. SCSI. Keep in mind that you'll only be able to
support two drives with the SIIG card, and one with the other one; port
multipliers may or may not be working yet.
Post by David J. Orman
#2 - Rack-mountable external enclosure for SAS/SATA drives, supporting hot swap of drives.
Basically, I'm trying to get around using Sun's extremely expensive storage solutions while waiting on them to release something reasonable now that ZFS exists.
Cheers,
David
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Frank Cusack
2007-01-22 19:42:58 UTC
Permalink
Post by Dan Mick
Scouting around a bit, I see SIIG has a 3132 chip, for which they make a
card, eSATA II, available in PCIe and PCIe ExpressCard formfactors. I
can't promise, but chances seem good that it's supported by si3124 driver
si3124 "pci1095,3124"
si3124 "pci1095,3132"
Street price for the PCIe card is $30-35.
Myself, I'd just like to have internal SATA with hot plug support.
(I'm using FC for external storage.) I've only found cards like this:

<http://www.cdw.com/shop/products/default.aspx?EDC=1070554>

which is $57. Could you share where I might find one for $30?

-frank
Dan Mick
2007-01-23 01:39:43 UTC
Permalink
Post by Frank Cusack
Post by Dan Mick
Scouting around a bit, I see SIIG has a 3132 chip, for which they make a
card, eSATA II, available in PCIe and PCIe ExpressCard formfactors. I
can't promise, but chances seem good that it's supported by si3124 driver
si3124 "pci1095,3124"
si3124 "pci1095,3132"
Street price for the PCIe card is $30-35.
Myself, I'd just like to have internal SATA with hot plug support.
<http://www.cdw.com/shop/products/default.aspx?EDC=1070554>
which is $57. Could you share where I might find one for $30?
-frank
I went to Froogle.com and searched for eSataII. That led me to, among others, this:


http://froogle.google.com/froogle_cluster?q=SIIG+eSata+II+PCI&btnG=Search&lmode=online&oid=14674630309093109908

but that shows a PCI card, even though it says PCIe...so there may be some
confusion.

CDW isn't where I'd generally look for low prices, though.
Frank Cusack
2007-01-20 07:23:49 UTC
Permalink
On January 19, 2007 5:59:13 PM -0800 "David J. Orman"
Post by David J. Orman
card that supports SAS would be *ideal*,
Except that SAS support on Solaris is not very good.

One major problem is they treat it like scsi when instead they should
treat it like FC (or native SATA).
Post by David J. Orman
# 1 - SAS/SATA PCI-E card that would work with the Sun X2200M2.
I had the LSI Logic 3442-E working on x86 but not reliably. That is
the only SAS controller Sun supports AFAIK.
Post by David J. Orman
# 2 - Rack-mountable external enclosure for SAS/SATA drives, supporting
# hot swap of drives.
The Promise VTrak J300s is the cheapest one I've found. Adaptec's been
advertising one forever (6+ months?) but it's not in production; at
least, you won't be able to find one without hard drives, and you
won't be able to find the dual-controller model.
Post by David J. Orman
Basically, I'm trying to get around using Sun's extremely expensive
storage solutions while waiting on them to release something reasonable
now that ZFS exists.
thumper (x4500) seems pretty reasonable ($/GB).

-frank
Shannon Roddy
2007-01-20 08:16:45 UTC
Permalink
Post by Frank Cusack
thumper (x4500) seems pretty reasonable ($/GB).
-frank
I am always amazed that people consider thumper to be reasonable in
price. 450% or more markup per drive from street price in July 2006
numbers doesn't seem reasonable to me, even after subtracting the cost
of the system. I like the x4500, I wish I had one. But, I can't pay
what Sun wants for it. So, instead, I am stuck buying lower end Sun
systems and buying third party SCSI/SATA JBODs. I like Sun. I like
their products, but I can't understand their storage pricing most of the
time.

-Shannon
Frank Cusack
2007-01-20 08:31:42 UTC
Permalink
On January 20, 2007 2:16:45 AM -0600 Shannon Roddy
Post by Shannon Roddy
Post by Frank Cusack
thumper (x4500) seems pretty reasonable ($/GB).
-frank
I am always amazed that people consider thumper to be reasonable in
price. 450% or more markup per drive from street price in July 2006
numbers doesn't seem reasonable to me, even after subtracting the cost
of the system. I like the x4500, I wish I had one. But, I can't pay
what Sun wants for it. So, instead, I am stuck buying lower end Sun
systems and buying third party SCSI/SATA JBODs.
But what data throughput do you get? Thumper is phenomenal.

It is a shame (for the consumer) that it's not available without drives.
Sun has always had an obscene markup on drives.

-frank
Shannon Roddy
2007-01-20 09:02:01 UTC
Permalink
Post by Frank Cusack
It is a shame (for the consumer) that it's not available without drives.
Sun has always had an obscene markup on drives.
-frank
To me, hard drives today are as much a commodity item as network cable,
GBICs, NICs, DVD drives, etc. Sun should not be marking them up at the
rate that they do. I would be happy to buy a Thumper at whatever
engineering cost they have calculated in for the system without the
drives. For Sun to charge 4-8 times street price for hard drives that
they order just the same as I do, from the same manufacturers that I
order from, is infuriating. It doesn't make that much difference on a
two-drive X2100, but when you are talking about 48 drives in a Thumper,
it makes paying that markup just insane. I still buy my X2100s without
drives for the same reason, though. My local Sun service guy out
here hears this from me all the time, and it is probably the only
complaint I really have about Sun. I pay ~$1k/TB right now for my ZFS
JBOD storage. It is mostly just bulk storage (user home directories &
the like) and does not require huge bandwidth. So, when Jonathan
Schwartz decides to sell them without drives, maybe I'll buy a few just
to have a nicely engineered system instead of my cabling mess currently
in the racks.

-Shannon
Anton B. Rang
2007-01-20 14:45:21 UTC
Permalink
Post by Shannon Roddy
To me, hard drives today are as much a commodity item as network cable,
GBICs, NICs, DVD drives, etc.
They are and they aren't. Reliability, particularly in high-heat & vibration environments, can vary quite a bit.
Post by Shannon Roddy
For sun to charge 4-8 times street price for hard drives that they order just the same
as I do from the same manufacturers that I order from is infuriating.
I won't argue with that; I remember when all the vendors were doing that. Maybe they still are, at least the ones who still sell drives. :-)

But in the particular case of a Thumper, I think Sun is doing the right thing by selling only qualified drives. That is a very dense case. Not every drive with the right form factor will work reliably in it. Even drives which work in another dense case may not work reliably because the heat & vibration profile is different.

That's a separate issue from the price charged for the drives; but I'd be very hesitant to sell and support a system without drives if I knew that only certain drives would work without "cooking" or excessive seek errors.


This message posted from opensolaris.org
Ed Gould
2007-01-20 18:12:29 UTC
Permalink
Post by Shannon Roddy
For sun to charge 4-8 times street price for hard drives that
they order just the same as I do from the same manufacturers that I
order from is infuriating.
Are you sure they're really the same drives? Mechanically, they
probably are, but last I knew (I don't work in the Storage part of Sun,
so I have no particular knowledge about current practices), Sun and
other systems vendors (I know both Apple and DEC did) had custom
firmware in the drives they resell. One reason for this is that the
systems vendors qualified the drives with a particular firmware load,
and did not buy just the latest firmware that the drive manufacturer
wanted to ship, for quality-control reasons. At least some of the time,
there were custom functionality changes as well.
--
--Ed
Jason J. W. Williams
2007-01-20 18:49:11 UTC
Permalink
Hi Shannon,

The markup is still pretty high on a per-drive basis. That being said,
$1-2/GB is darn low for the capacity in a server. Plus, you're also
paying for having enough HyperTransport I/O to feed the PCI-E I/O.

Does anyone know what problems they had with the 250GB version of the
Thumper that caused them to pull it?

Best Regards,
Jason
Post by Shannon Roddy
Post by Frank Cusack
thumper (x4500) seems pretty reasonable ($/GB).
-frank
I am always amazed that people consider thumper to be reasonable in
price. 450% or more markup per drive from street price in July 2006
numbers doesn't seem reasonable to me, even after subtracting the cost
of the system. I like the x4500, I wish I had one. But, I can't pay
what Sun wants for it. So, instead, I am stuck buying lower end Sun
systems and buying third party SCSI/SATA JBODs. I like Sun. I like
their products, but I can't understand their storage pricing most of the
time.
-Shannon
_______________________________________________
zfs-discuss mailing list
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Richard Elling
2007-01-21 02:08:07 UTC
Permalink
Post by Frank Cusack
On January 19, 2007 5:59:13 PM -0800 "David J. Orman"
Post by David J. Orman
card that supports SAS would be *ideal*,
Except that SAS support on Solaris is not very good.
One major problem is they treat it like scsi when instead they should
treat it like FC (or native SATA).
uhmm... SAS is serial-attached SCSI; why wouldn't we treat it like SCSI?

BTW, the sd driver and ssd (SCSI over fibre channel) drivers have the same
source. SATA will also use the sd driver, as Pawel describes in his blogs
on the SATA framework at http://blogs.sun.com/pawelblog

What I gather from this is that today, SATA drives will either look like IDE
drives or SCSI drives, to some extent. When they look like IDE drives, you
don't get all of the cfgadm or luxadm management options and you have to do
things like hot plug in a more-rather-than-less manual mode. When they look
like SCSI drives, then you'll also get the more-automatic hot plug features.
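
Roughly, the difference is whether you get something like this (a sketch; the
sata0/3 attachment point name is hypothetical):

  cfgadm -al                      # list controllers and attachment points
  cfgadm -c unconfigure sata0/3   # release a disk before pulling it
  # ...swap the drive...
  cfgadm -c configure sata0/3     # bring the replacement online

or whether you're stuck doing the equivalent by hand.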
-- richard
C***@Sun.COM
2007-01-21 10:37:22 UTC
Permalink
Post by Richard Elling
What I gather from this is that today, SATA drives will either look like IDE
drives or SCSI drives, to some extent. When they look like IDE drives, you
don't get all of the cfgadm or luxadm management options and you have to do
things like hot plug in a more-rather-than-less manual mode. When they look
like SCSI drives, then you'll also get the more-automatic hot plug features.
In the one case they're running the controller in compatibility mode; for
the other case you'll need the appropriate SATA controller driver.

Casper
Frank Cusack
2007-01-22 16:47:22 UTC
Permalink
On January 20, 2007 6:08:07 PM -0800 Richard Elling
Post by Richard Elling
Post by Frank Cusack
On January 19, 2007 5:59:13 PM -0800 "David J. Orman"
Post by David J. Orman
card that supports SAS would be *ideal*,
Except that SAS support on Solaris is not very good.
One major problem is they treat it like scsi when instead they should
treat it like FC (or native SATA).
uhmm... SAS is serial attached SCSI, why wouldn't we treat it like SCSI?
On January 21, 2007 8:17:10 PM +1100 "James C. McPherson"
Post by Richard Elling
Uh ... you do know that the second "S" in SAS stands for
"serial-attached SCSI", right?
Uh ... you do know that the SCSI part of SAS refers to the command
set, right? And not the physical topology and associated things.
(Please forgive any terminology errors, you know what I mean.)

That seems like saying, "Uh ... you do know that there is no SCSI in FC,
right?" (Yet FC is still SCSI.)
Post by Richard Elling
Would you please expand upon this, because I'm really interested
in what your thoughts are..... since I work on Sun's SAS driver :)
SAS is limited, by the Solaris driver, to 16 devices. Not even that,
it's limited to devices with SCSI id's 0-15, so if you have 16 drives
and they start at id 10, well you only get access to 6 of them.

But SAS doesn't even really have scsi target id's. It has WWN-like
identifiers. I guess HBAs do some kind of mapping but it's not
reliable and can change, and inspecting or hardcoding device->id
mappings requires changing settings in the card's BIOS/OF.

Also, the HBA may renumber devices. That can be a big problem.

It would be better to use the SASAddress the way the fibre channel
drivers use the WWN. Drives could still be mapped to scsi id's, but
it should be done by the Solaris driver, not the HBA. And when
multipathing the names should change like with FC.
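
To make that concrete, it's roughly the difference between these two styles of
device name (both paths below are hypothetical):

  /dev/rdsk/c2t3d0s2                    # HBA-assigned target id; can change if the HBA renumbers
  /dev/rdsk/c4t2100001086A0B1C2d0s2     # WWN-based name, FC style; follows the drive itself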

That's one thing. The other is unreliability with many devices
attached. I've talked to others that have had this problem as well.
I offered to send my controller(s) and JBOD to Sun for testing, through
the support channel (I had a bug open on this for a while), but they
didn't want it. I think it came down to the classic "we don't
sell that hardware" problem. The onboard SAS controllers (X4100, V215,
etc.) work fine due to the limited topology. I wonder how you fix
(hardcode) the SCSI IDs with those, because you're not doing it
with a PCI card.

-frank
James C. McPherson
2007-01-22 21:53:30 UTC
Permalink
Hi Frank,
Post by Frank Cusack
On January 20, 2007 6:08:07 PM -0800 Richard Elling
Post by Richard Elling
Post by Frank Cusack
On January 19, 2007 5:59:13 PM -0800 "David J. Orman"
Post by David J. Orman
card that supports SAS would be *ideal*,
Except that SAS support on Solaris is not very good.
One major problem is they treat it like scsi when instead they should
treat it like FC (or native SATA).
uhmm... SAS is serial attached SCSI, why wouldn't we treat it like SCSI?
On January 21, 2007 8:17:10 PM +1100 "James C. McPherson"
Post by Richard Elling
Uh ... you do know that the second "S" in SAS stands for
"serial-attached SCSI", right?
Uh ... you do know that the SCSI part of SAS refers to the command
set, right? And not the physical topology and associated things.
(Please forgive any terminology errors, you know what I mean.)
That seems like saying, "Uh ... you do know that there is no SCSI in FC,
right?" (Yet FC is still SCSI.)
Sorry, I should have been more specific there. I was responding
to your "(or native SATA)" comment.
Post by Frank Cusack
Post by Richard Elling
Would you please expand upon this, because I'm really interested
in what your thoughts are..... since I work on Sun's SAS driver :)
SAS is limited, by the Solaris driver, to 16 devices.
Correct.
Post by Frank Cusack
Not even that,
it's limited to devices with SCSI id's 0-15, so if you have 16 drives
and they start at id 10, well you only get access to 6 of them.
Why would you start your numbering at 10?
Post by Frank Cusack
But SAS doesn't even really have scsi target id's. It has WWN-like
identifiers. I guess HBAs do some kind of mapping but it's not
reliable and can change, and inspecting or hardcoding device->id
mappings requires changing settings in the card's BIOS/OF.
SAS has WWNs because that is what the standard requires. SAS HBA
implementors are free to map WWNs to relatively user-friendly
identifiers, which is what the LSI SAS1064/SAS1064E chips do.
Post by Frank Cusack
Also, the HBA may renumber devices. That can be a big problem.
Agreed. No argument there!
Post by Frank Cusack
It would be better to use the SASAddress the way the fibre channel
drivers use the WWN. Drives could still be mapped to scsi id's, but
it should be done by the Solaris driver, not the HBA. And when
multipathing the names should change like with FC.
That too is my preference. We're currently working on multipathing
with SAS.
Post by Frank Cusack
That's one thing. The other is unreliability with many devices
attached. I've talked to others that have had this problem as well.
I offered to send my controller(s) and JBOD to Sun for testing, through
the support channel (I had a bug open on this for awhile), but they
didn't want it. I think it came down to the classic "we don't
sell that hardware" problem. The onboard SAS controllers (x4100, v215
etc) work fine due to the limited topology. I wonder how you fix
(hardcode) the scsi id's with those. Because you're not doing it
with a PCI card.
With a physically limited topology, numbering isn't an issue because
of the way that the ports are connected to the onboard devices. It's
external devices (requiring a plug-in HBA) where it's potentially a
problem. Of course, to fully exploit that situation you'd need to
have 64K addressable targets attached to a single controller, and
that hasn't happened yet. So we do have a window of opportunity :)


best regards,
James C. McPherson
--
Solaris kernel software engineer
Sun Microsystems
Frank Cusack
2007-01-23 04:17:38 UTC
Permalink
On January 23, 2007 8:53:30 AM +1100 "James C. McPherson"
Post by Jason J. W. Williams
Hi Frank,
Post by Frank Cusack
Post by James C. McPherson
Would you please expand upon this, because I'm really interested
in what your thoughts are..... since I work on Sun's SAS driver :)
SAS is limited, by the Solaris driver, to 16 devices.
Correct.
Post by Frank Cusack
Not even that,
it's limited to devices with SCSI id's 0-15, so if you have 16 drives
and they start at id 10, well you only get access to 6 of them.
Why would you start your numbering at 10?
Because you don't have a choice. It is up to the HBA, and getting it
to do the right thing (i.e., what you want) isn't always easy. IIRC,
the LSI Logic HBA(s) I had would automatically remember SASAddress to
SCSI ID mappings. So if you had attached 16 drives, removed one
and replaced it with a different one (even in a JBOD, i.e. it would
be attached to the same PHY), it would be id 16, because the first 16
SCSI IDs (0-15) were already accounted for. And then the new drive,
let's call it a replacement for a failed drive, would be inaccessible
under Solaris.

Why it would ever start at something other than 0, I'm not sure. I
also kind of remember that scsi.conf had some setting to map the HBA
to target 7 (which doesn't apply to SAS! yet the reference there was
specifically for the LSI 1068; again, IIRC). I think that I was seeing
that drives started at 8 because of this initialization, and that
removing it allowed the drives to start at 0 -- once I reset the HBA
BIOS to forget the mappings it had already made.
Post by Jason J. W. Williams
Post by Frank Cusack
But SAS doesn't even really have scsi target id's. It has WWN-like
identifiers. I guess HBAs do some kind of mapping but it's not
reliable and can change, and inspecting or hardcoding device->id
mappings requires changing settings in the card's BIOS/OF.
SAS has WWNs because that is what the standard requires. SAS hba
implementors are free to map WWNs to relatively user-friendly
identifiers, which is what the LSI SAS1064/SAS1064E chips do.
Post by Frank Cusack
Also, the HBA may renumber devices. That can be a big problem.
Agreed. No argument there!
Post by Frank Cusack
It would be better to use the SASAddress the way the fibre channel
drivers use the WWN. Drives could still be mapped to scsi id's, but
it should be done by the Solaris driver, not the HBA. And when
multipathing the names should change like with FC.
That too is my preference. We're currently working on multipathing
with SAS.
That is good to hear.
Post by Jason J. W. Williams
Post by Frank Cusack
That's one thing. The other is unreliability with many devices
attached. I've talked to others that have had this problem as well.
I offered to send my controller(s) and JBOD to Sun for testing, through
the support channel (I had a bug open on this for awhile), but they
didn't want it. I think it came down to the classic "we don't
sell that hardware" problem. The onboard SAS controllers (x4100, v215
etc) work fine due to the limited topology. I wonder how you fix
(hardcode) the scsi id's with those. Because you're not doing it
with a PCI card.
With a physically limited topology numbering isn't an issue because
of the way that the ports are connected to the onboard devices. It's
external devices (requiring a plugin hba) where it's potentially a
problem. Of course, to fully exploit that situation you'd need to
have 64K addressable targets attached to a single controller, and
that hasn't happened yet. So we do have a window of opportunity :)
I believe SAS supports a maximum of 128 devices per controller, including
multipliers.

-frank
James C. McPherson
2007-01-23 04:38:32 UTC
Permalink
Post by Frank Cusack
On January 23, 2007 8:53:30 AM +1100 "James C. McPherson"
...
Post by Frank Cusack
Post by James C. McPherson
Why would you start your numbering at 10?
Because you don't have a choice. It is up to the HBA and getting it
to do the right thing (ie, what you want) isn't always easy. IIRC,
the LSI Logic HBA(s) I had would automatically remember SASAddress to
SCSI ID mappings. So if you had attached 16 drives, removed one
and replaced it with a different one (even in a JBOD, ie it would
be attached to the same PHY), it would be id 16, because the first 16
scsi id's (0-15) were already accounted for. And then the new drive,
lets call it a replacement for a failed drive, would be unaccessible
under Solaris.
Oh heck. That sounds like one helluva broken way of doing things.
Post by Frank Cusack
Why it would ever start at something other than 0, I'm not sure. I
also kind of remember that scsi.conf had some setting to map the HBA
to target 7 (which doesn't apply to SAS! yet the reference there was
specifically for LSI 1068. again IIRC). I think that I was seeing
that drives started at 8 because of this initialization, and that
removing it allowed the drives to start at 0 -- once I reset the HBA
BIOS to forget the mappings it had already made.
/me groans ... more brokenness. I'll pass this on to some others in
our team who've been working on a similar issue.

...
Post by Frank Cusack
Post by James C. McPherson
With a physically limited topology numbering isn't an issue because
of the way that the ports are connected to the onboard devices. It's
external devices (requiring a plugin hba) where it's potentially a
problem. Of course, to fully exploit that situation you'd need to
have 64K addressable targets attached to a single controller, and
that hasn't happened yet. So we do have a window of opportunity :)
I believe SAS supports a maximum of 128 devices per controller, including
multipliers.
Not quite correct - each expander device can have 128 connections,
up to a max of 16256 devices in a single SAS domain. My figure of
64K addressable targets makes an assumption about the number of
SAS domains that a controller can have :)

Even so, we've still got that window.


cheers,
James C. McPherson
--
Solaris kernel software engineer
Sun Microsystems
Frank Cusack
2007-01-23 04:47:21 UTC
Permalink
On January 23, 2007 3:38:32 PM +1100 "James C. McPherson"
Post by James C. McPherson
/me groans ... more brokenness. I'll pass this onto some others in
our team who've been working on a similar issue.
Cool. I really hope Solaris gets good SAS support; it's a great
technology and a good complement to SATA.

But I have my doubts about how useful it is in the short term. AFAIK
only Adaptec and LSI Logic are making controllers today. With so few
manufacturers it's a scary investment. (Of course, someone please
correct me if you know of other players.)

-frank
David J. Orman
2007-01-23 18:33:20 UTC
Permalink
*snip snip*
Post by Frank Cusack
AFAIK
only Adaptec and LSI Logic are making controllers
today. With so few
manufacturers it's a scary investment. (Of course,
someone please
correct me if you know of other players.)
There are a few others. Those are (of course) the major players (and with big names like that making them, you can be pretty sure they are going to be around for a while...)

That said, I know of ARIO Data ( http://www.ariodata.com/products/controllers/ ) making some (or ramping up to make them.) I'm sure there are some others. It's certainly not as common as SATA/SCSI/etc right now; up until recently you couldn't even buy drives. Now, the fastest drive I've seen is SAS only (15k 2.5" Seagate). I'm pretty sure that when Seagate is making its fastest product SAS, SAS has been accepted. :p

http://techreport.com/onearticle.x/11638


This message posted from opensolaris.org
James C. McPherson
2007-01-21 09:17:10 UTC
Permalink
Post by Frank Cusack
On January 19, 2007 5:59:13 PM -0800 "David J. Orman"
Post by David J. Orman
card that supports SAS would be *ideal*,
Except that SAS support on Solaris is not very good.
One major problem is they treat it like scsi when instead they should
treat it like FC (or native SATA).
Uh ... you do know that the second "S" in SAS stands for
"serial-attached SCSI", right? Native SATA is a subset
of native SAS, too. What I'm intrigued by is your assertion
that we should treat SAS the same way we treat FC.

Would you please expand upon this, because I'm really interested
in what your thoughts are..... since I work on Sun's SAS driver :)

I would also like to get some feedback on what you and others
would like to see for Sun's SAS support. Not guaranteeing
anything, but I'm happy to act as a channel to the relevant
people who have signoff on things like this.



cheers,
James C. McPherson
--
Solaris kernel software engineer
Sun Microsystems
Al Hopper
2007-01-21 13:38:01 UTC
Permalink
On Sun, 21 Jan 2007, James C. McPherson wrote:

... snip ....
Post by James C. McPherson
Would you please expand upon this, because I'm really interested
in what your thoughts are..... since I work on Sun's SAS driver :)
Hi James - just the man I have a couple of questions for... :)

Will the LsiLogic 3041E-R (4-port internal, SAS/SATA, PCI-e) HBA work as a
generic ZFS/JBOD SATA controller?

There are a few white-box hackers on this list looking for a
solid/reliable SATA HBA with a PCI-e (PCI Express) connector - rather than
the rock-solid Supermicro/Marvell board, which is only available with a
64-bit PCI-X connector at the moment.

Thanks,

Al Hopper Logical Approach Inc, Plano, TX. ***@logical-approach.com
Voice: 972.379.2133 Fax: 972.379.2134 Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
OpenSolaris Governing Board (OGB) Member - Feb 2006
James C. McPherson
2007-01-21 22:43:09 UTC
Permalink
Post by Al Hopper
... snip ....
Post by James C. McPherson
Would you please expand upon this, because I'm really interested
in what your thoughts are..... since I work on Sun's SAS driver :)
Hi James - just the man I have a couple of questions for... :)
Will the LsiLogic 3041E-R (4-port internal, SAS/SATA, PCI-e) HBA work as a
generic ZFS/JBOD SATA controller?
There are a few white-box hackers on this list looking for a
solid/reliable SATA HBA with a PCI-e (PCI Express) connector - rather than
the rock-solid Supermicro/Marvell board, which is only available with a
64-bit PCI-X connector at the moment.
Hi Al,
according to the 3041E-R two-page PDF which I found at
http://www.lsi.com/documentation/storage/scg/hbas/sas/lsisas3041e-r_pb.pdf
the SAS ASIC is the LSISAS1064E, which to the best of my
knowledge is supported by the mpt driver.

So the answer to your question is "I don't see why not" :)
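
If anyone wants to double-check on their own build before buying, something
like this should tell you (just a sketch; the exact PCI IDs a given card
reports may differ):

  # Which PCI IDs does the mpt driver claim?
  grep '^mpt ' /etc/driver_aliases

  # After installing the HBA, did it actually attach to mpt?
  prtconf -D | grep -i mpt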


That chip is also the onboard controller with the T1000,
T2000, Ultra25 and Ultra45.

cheers,
James C. McPherson
--
Solaris kernel software engineer
Sun Microsystems
Frank Cusack
2007-01-22 16:15:46 UTC
Permalink
Post by Al Hopper
... snip ....
Post by James C. McPherson
Would you please expand upon this, because I'm really interested
in what your thoughts are..... since I work on Sun's SAS driver :)
Hi James - just the man I have a couple of questions for... :)
Will the LsiLogic 3041E-R (4-port internal, SAS/SATA, PCI-e) HBA work as a
generic ZFS/JBOD SATA controller?
It does (I've used it). Kind of.

When I had it attached to an external JBOD, it worked fine with only
1 or 2 drives, but when the JBOD (Promise J300s) was fully populated
with 12 drives, it flaked out (I/O errors). Windows had no problems.

It works better with the LSI drivers than the Sun mpt driver.

Sorry I don't remember many more details than that. You can search
on comp.unix.solaris for a thread a few months ago about it.

It only works on x86.

I ended up selling it and the JBOD; I just couldn't get it working reliably.

-frank
Frank Cusack
2007-01-22 16:44:39 UTC
Permalink
Post by Frank Cusack
Post by Al Hopper
... snip ....
Post by James C. McPherson
Would you please expand upon this, because I'm really interested
in what your thoughts are..... since I work on Sun's SAS driver :)
Hi James - just the man I have a couple of questions for... :)
Will the LsiLogic 3041E-R (4-port internal, SAS/SATA, PCI-e) HBA work as
a generic ZFS/JBOD SATA controller?
It does (I've used it). Kind of.
Eh, sorry, I had a 3042E-R. I think that was the model #. Same thing
though, just with 2 external and 2 internal ports instead of 4 internal.
I also had the PCI-X version and had the same issues.
Post by Frank Cusack
When I've had it attached to an external JBOD, it works fine with only
1 or 2 drives, but when the JBOD (promise j300s) is fully populated
with 12 drives, it flakes out (I/O errors). Windows had no problems.
It works better with the LSI drivers than the Sun mpt driver.
Sorry I don't remember many more details than that. You can search
on comp.unix.solaris for a thread a few months ago about it.
It only works on x86.
I ended up selling it and the JBOD, just couldn't get it working reliably.
-frank
_______________________________________________
zfs-discuss mailing list
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Marion Hakanson
2007-01-20 21:18:13 UTC
Permalink
Post by David J. Orman
I was talking about the huge gap in storage solutions from Sun for the
middle-ground. While $24,000 is a wonderful deal, it's absolute overkill for
what I'm thinking about doing. I was looking for more around 6-8 drives.
How about a Sun V40z? It's available with up to 6 drives (300GB each),
and a low-end configuration (CPU/RAM-wise) might not be out of your
price range, depending on your discount. There are plenty of slots
if you want to add external enclosures later, too.

Of course, Dell probably has cheaper 64-bit systems with 6 internal
drives available too.

Regards,

Marion
Elm, Rob
2007-01-22 19:32:58 UTC
Permalink
For the most part, all SATA devices are hotswappable... If there is some
argument about the capability to hotswap in the X2200 M2, it's a limitation
of the OS/drivers and not the hardware.

Sincerely,

Rob Elm
System Analyst

Clark Consulting
3600 American Boulevard West
Bloomington, MN 55431
***@clarkconsulting.com
www.clarkconsulting.com
NYSE: CLK



-----Original Message-----
From: zfs-discuss-***@opensolaris.org
[mailto:zfs-discuss-***@opensolaris.org] On Behalf Of David J. Orman
Sent: Monday, January 22, 2007 1:20 PM
To: zfs-***@opensolaris.org
Subject: [zfs-discuss] Re: Re: External drive enclosures + Sun Server for
Post by Jason J. W. Williams
Hi Frank,
I'm sure Richard will check it out. He's a very good guy and not
trying to jerk you around. I'm sure the hostility isn't warranted. :-)
Best Regards,
Jason
I'm very confused now. Do the X2200 M2s support "hot plug" of drives or not?
I can't believe it's that confusing/difficult. They do or they don't. I
don't care whether I can just yank a drive out of a running system and have no
problems, but I *do* need to be able to swap a failed disk in a mirror
without downtime.
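
For what it's worth, the flow I'm hoping for is roughly this, assuming the
data disks are in a ZFS mirror and the controller really does support hot
plug (the pool, attachment point and device names below are made up):

  zpool status tank                 # identify the faulted side of the mirror
  cfgadm -c unconfigure sata1/0     # release the failed disk
  # ...physically swap the drive...
  cfgadm -c configure sata1/0       # bring the new disk online
  zpool replace tank c1t1d0         # resilver onto the replacement
  zpool status tank                 # watch the resilver finish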

Does Sun not have an official word on this? I'm rapidly losing faith given
the lack of a definitive response to this question.

Along these same lines, what is the roadmap for ZFS on boot disks? I've not
heard anything about it in quite some time, and Google doesn't yield any
current information either.


This message posted from opensolaris.org
Richard Elling
2007-01-22 22:12:24 UTC
Permalink
Post by Elm, Rob
For the most part, all SATA devices are hotswappable... If there is some
argument about the capability to hotswap in the X2200 M2, it's a limitation
of the OS/drivers and not the hardware.
s/hotswappable/hot pluggable/g
sigh.
-- richard