Discussion:
LSI SAS2008 mps driver preferred firmware version
Kai Gallasch
2015-11-12 21:05:13 UTC
Permalink
Hi.

I'm currently building a new ZFS-based FreeBSD 10.2 server with an LSI
SAS9211-8i SAS/SATA HBA.

Is there a preferred or recommended firmware version for Fusion-MPT
SAS-2 2008 chipset-based LSI cards like the SAS9211-8i? mps(4) does not
give any information about this.

The current versions on my SAS9211-8i are:
BIOS: v7.05.05.00 (2010.05.19)
FW:   5.00.17.00-IR


IR vs. IT firmware:

Are there any advantages to replacing the -IR (Integrated RAID) firmware on
the LSI controller with an -IT (initiator target) version if the RAID
functionality of the HBA is not used at all?

There were some claims that running the -IR version in a ZFS JBOD setup
results in a small performance penalty compared to -IT, and that a
controller running the -IR firmware could potentially damage ZFS data on a
disk by writing RAID metadata somewhere on the drive, even when the RAID
feature of the card is not used at all!

I'd appreciate it if someone could shed some light on this.

Regards,
Kai.
--
PGP-KeyID = 0x70654D7C4FB1F588
One day a lemming will fly..
Royce Williams
2015-11-12 22:20:38 UTC
Permalink
Firmware should match driver, e.g.:

mps0: Firmware: 19.00.00.00, Driver: 19.00.00.00-fbs
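
If you want to check what your own card reports, the boot messages are
enough (a quick sketch; the exact wording of the line depends on the driver
version):

grep -i mps0 /var/run/dmesg.boot   # shows the "mps0: Firmware: ..., Driver: ..." line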


Some of this may help -- not yet updated for 10.2, but may still be useful:

http://roycebits.blogspot.com/2015/01/freebsd-lsi-sas9211-8i-hba-firmware.html

Royce
Post by Kai Gallasch
Hi.
I'm currently building a new ZFS based FreeBSD 10.2 server with a
SAS/SATA HBA SAS9211-8i.
Is there a preferred or recommended firmware version for Fusion-MPT
SAS-2 2008 chipset based LSI cards like the SAS9211-8i? MPS(4) does not
give any information about this.
v7.05.05.00 (2010.05.19), BIOS
5.00.17.00-IR, FW
Are there any advantages replacing the -IR (integrated raid) firmware on
the LSI controller with an -IT (target mode) version, if the RAID
functionality of the HBA is not used at all?
There were some claims that running the -IR version in a ZFS JBOD setup
would result in a small performance penalty compared to -IT and that
there was a risk that a controller running the -IR firmware version
could potentially damage ZFS data on a disk by putting RAID metadata
somewhere on the drive, even if not using the RAID feature of the card!
I'd appreciate it if someone could shed some light on this.
Regards,
Kai.
--
PGP-KeyID = 0x70654D7C4FB1F588
One day a lemming will fly..
Stephen Mcconnell via freebsd-scsi
2015-11-12 22:44:43 UTC
Permalink
-----Original Message-----
Sent: Thursday, November 12, 2015 3:21 PM
To: Kai Gallasch
Subject: Re: LSI SAS2008 mps driver preferred firmware version
mps0: Firmware: 19.00.00.00, Driver: 19.00.00.00-fbs
I've never heard of any problems when these are mismatched, so I'm not
sure why FreeNAS would complain. Anyway, you should use the latest of
both in my opinion.
The latest FW on the avagotech website is 20.00.04.00. I have heard that
some FreeBSD users have had some problems with the PH19 FW.

Steve McConnell
http://roycebits.blogspot.com/2015/01/freebsd-lsi-sas9211-8i-hba-firmware.html
Royce
Post by Kai Gallasch
Hi.
I'm currently building a new ZFS based FreeBSD 10.2 server with a
SAS/SATA HBA SAS9211-8i.
Is there a preferred or recommended firmware version for Fusion-MPT
SAS-2 2008 chipset based LSI cards like the SAS9211-8i? MPS(4) does
not give any information about this.
v7.05.05.00 (2010.05.19), BIOS
5.00.17.00-IR, FW
Are there any advantages replacing the -IR (integrated raid) firmware
on the LSI controller with an -IT (target mode) version, if the RAID
functionality of the HBA is not used at all?
There were some claims that running the -IR version in a ZFS JBOD
setup would result in a small performance penalty compared to -IT and
that there was a risk that a controller running the -IR firmware
version could potentially damage ZFS data on a disk by putting RAID
metadata somewhere on the drive, even if not using the RAID feature of
the
card!
Post by Kai Gallasch
I'd appreciate it if someone could shed some light on this.
Regards,
Kai.
--
PGP-KeyID = 0x70654D7C4FB1F588
One day a lemming will fly..
Stephen Mcconnell via freebsd-scsi
2015-11-12 23:27:39 UTC
Permalink
-----Original Message-----
Sent: Thursday, November 12, 2015 3:45 PM
To: 'Royce Williams'; 'Kai Gallasch'
Subject: RE: LSI SAS2008 mps driver preferred firmware version
-----Original Message-----
Sent: Thursday, November 12, 2015 3:21 PM
To: Kai Gallasch
Subject: Re: LSI SAS2008 mps driver preferred firmware version
mps0: Firmware: 19.00.00.00, Driver: 19.00.00.00-fbs
I've never heard of any problems when these are mismatched, so I'm not
sure why FreeNAS would complain. Anyway, you should use the latest of
both in my opinion.
The latest FW on the avagotech website is 20.00.04.00. I have heard that
some FreeBSD users have had some problems with the PH19 FW.
Steve McConnell
http://roycebits.blogspot.com/2015/01/freebsd-lsi-sas9211-8i-hba-firmware.html
Royce
Post by Kai Gallasch
Hi.
I'm currently building a new ZFS based FreeBSD 10.2 server with a
SAS/SATA HBA SAS9211-8i.
Is there a preferred or recommended firmware version for Fusion-MPT
SAS-2 2008 chipset based LSI cards like the SAS9211-8i? MPS(4) does
not give any information about this.
v7.05.05.00 (2010.05.19), BIOS
5.00.17.00-IR, FW
Are there any advantages replacing the -IR (integrated raid)
firmware on the LSI controller with an -IT (target mode) version, if
the RAID functionality of the HBA is not used at all?
There were some claims that running the -IR version in a ZFS JBOD
setup would result in a small performance penalty compared to -IT
and that there was a risk that a controller running the -IR firmware
version could potentially damage ZFS data on a disk by putting RAID
metadata somewhere on the drive, even if not using the RAID feature
of the
card!
And also, I asked someone who works on the FW about these IR concerns and
he says the only reason for a performance issue is that the IR FW is a bit
larger, and therefore the command queue depth will be smaller due to the
amount of resources available, so it is possible to have a slight
performance degradation in some cases. Other than that, once it is
determined that there are no IR drives the FW acts just like IT. And there
is no data corruption issue for ZFS disks. If there were, that would be bad
and a high-priority defect would need to be filed :) If there are no IR
volumes, the FW works just like IT, so there would be no reason to write
metadata to a non-IR disk. Even if there were a separate IR volume, the
ZFS disk would not be written with metadata because it's not part of an IR
volume.

Steve
Post by Kai Gallasch
I'd appreciate it if someone could shed some light on this.
Regards,
Kai.
--
PGP-KeyID = 0x70654D7C4FB1F588
One day a lemming will fly..
Kai Gallasch
2015-11-14 12:18:14 UTC
Permalink
Post by Royce Williams
mps0: Firmware: 19.00.00.00, Driver: 19.00.00.00-fbs
http://roycebits.blogspot.com/2015/01/freebsd-lsi-sas9211-8i-hba-firmware.html
Thanks! Lots of information about reflashing the 9211-8i.
So I upgraded the controller's old firmware from

mps0: Firmware: 05.00.17.00, Driver: 20.00.00.00-fbsd
to
mps0: Firmware: 20.00.04.00, Driver: 20.00.00.00-fbsd
(FreeBSD 10.2)

As I understand it, firmware 20.00.00.00 was pulled by Avago and
replaced with the fixed version 20.00.04.00.

I will give feedback if I notice any problems with this FW version.
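
For anyone finding this thread later: the reflash itself can be done from the
FreeBSD host with Avago's sas2flash utility. Roughly (a sketch only -- the
image names 2118it.bin and mptsas2.rom come from the downloaded firmware
package, and the -b step is only needed if you want the boot BIOS):

sas2flash -listall                         # show controllers and current versions
sas2flash -o -f 2118it.bin -b mptsas2.rom  # write the new firmware (and BIOS)
sas2flash -listall                         # verify the new version afterwards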

As a side note: Flashing the 9211-8i to the new firmware version changed
the way FreeBSD orders the disk devices on this server:

With the old firmware it looked like this:

root@:~ # camcontrol devlist
<HITACHI HUS156030VLS600 A760> at scbus0 target 10 lun 0 (pass0,da0)
<HITACHI HUS156030VLS600 A5D0> at scbus0 target 11 lun 0 (pass1,da1)
<ATA INTEL SSDSC2BA10 0270> at scbus0 target 12 lun 0 (pass2,da2)
<ATA INTEL SSDSC2BA10 0270> at scbus0 target 13 lun 0 (pass3,da3)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 14 lun 0 (pass4,da4)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 15 lun 0 (pass5,da5)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 16 lun 0 (pass6,da6)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 17 lun 0 (pass7,da7)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 18 lun 0 (pass8,da8)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 19 lun 0 (pass9,da9)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 20 lun 0 (pass10,da10)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 21 lun 0 (pass11,da11)
<SUN HYDE12 0341> at scbus0 target 22 lun 0 (pass12,ses0)
<AHCI SGPIO Enclosure 1.00 0001> at scbus7 target 0 lun 0 (pass13,ses1)

The order matches the order of the disks in the drive bays
(da0 = bay 1, da1 = bay 2, ...).


With the new firmware it now looks like this:

<WD WD2001FYYG-01SL3 VR08> at scbus0 target 8 lun 0 (pass0,da0)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 9 lun 0 (pass1,da1)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 10 lun 0 (pass2,da2)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 11 lun 0 (pass3,da3)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 12 lun 0 (pass4,da4)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 13 lun 0 (pass5,da5)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 14 lun 0 (pass6,da6)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 15 lun 0 (pass7,da7)
<ATA INTEL SSDSC2BA10 0270> at scbus0 target 16 lun 0 (pass8,da8)
<ATA INTEL SSDSC2BA10 0270> at scbus0 target 17 lun 0 (pass9,da9)
<HITACHI HUS156030VLS600 A5D0> at scbus0 target 18 lun 0 (pass10,da10)
<HITACHI HUS156030VLS600 A760> at scbus0 target 19 lun 0 (pass11,da11)
<SUN HYDE12 0341> at scbus0 target 20 lun 0 (pass12,ses0)
<AHCI SGPIO Enclosure 1.00 0001> at scbus7 target 0 lun 0 (pass13,ses1)

So now the drive in the last drive bay is seen as da0 and the
drive in the first drive bay as da11.

But: In the controller BIOS the scan order of the drives did not change
at all with the new firmware! So the change is only in the way FreeBSD
sees the drives.

My explanation for this change in drive ordering is that my 9211-8i is
a Sun-branded one (SGX-SAS6-INT-Z) and the server is a Sun server. So
maybe the original firmware contained some adaptations for this server
that are missing in the new firmware.

Can the way FreeBSD orders scanned SAS drives be changed? If not, no
problem, as I use partition labels for my ZFS pools and the disks are
also physically labeled on the server.

Regards,
Kai.
--
PGP-KeyID = 0x70654D7C4FB1F588
One day a lemming will fly..
Stephen Mcconnell via freebsd-scsi
2015-11-14 17:48:24 UTC
Permalink
-----Original Message-----
Sent: Saturday, November 14, 2015 7:31 AM
To: Kai Gallasch
Subject: Re: LSI SAS2008 mps driver preferred firmware version
Post by Kai Gallasch
Post by Royce Williams
mps0: Firmware: 19.00.00.00, Driver: 19.00.00.00-fbs
http://roycebits.blogspot.com/2015/01/freebsd-lsi-sas9211-8i-hba-firmware.html
Thanks! Lots of information about reflashing the 9211-8i.
So I upgraded the controller's old firmware from
Firmware: 20.00.04.00, Driver: 20.00.00.00-fbsd (FreeBSD 10.2)
As I understand it the firmware 20.00.00.00 was pulled by avago and
replaced with the fixed version 20.00.04.00
I will give feedback if I notice any problems with this FW version.
As a side note: Flashing the 9211-8i to the new firmware version
<HITACHI HUS156030VLS600 A760> at scbus0 target 10 lun 0 (pass0,da0)
<HITACHI HUS156030VLS600 A5D0> at scbus0 target 11 lun 0 (pass1,da1)
<ATA INTEL SSDSC2BA10 0270> at scbus0 target 12 lun 0 (pass2,da2)
<ATA INTEL SSDSC2BA10 0270> at scbus0 target 13 lun 0 (pass3,da3)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 14 lun 0 (pass4,da4)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 15 lun 0 (pass5,da5)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 16 lun 0 (pass6,da6)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 17 lun 0 (pass7,da7)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 18 lun 0 (pass8,da8)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 19 lun 0 (pass9,da9)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 20 lun 0 (pass10,da10)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 21 lun 0 (pass11,da11)
<SUN HYDE12 0341> at scbus0 target 22 lun 0 (pass12,ses0)
<AHCI SGPIO Enclosure 1.00 0001> at scbus7 target 0 lun 0 (pass13,ses1)
The order is according to the order the disks are placed in the drive
bays: (da0, bay1; da1, bay2, ..)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 8 lun 0 (pass0,da0)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 9 lun 0 (pass1,da1)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 10 lun 0 (pass2,da2)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 11 lun 0 (pass3,da3)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 12 lun 0 (pass4,da4)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 13 lun 0 (pass5,da5)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 14 lun 0 (pass6,da6)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 15 lun 0 (pass7,da7)
<ATA INTEL SSDSC2BA10 0270> at scbus0 target 16 lun 0 (pass8,da8)
<ATA INTEL SSDSC2BA10 0270> at scbus0 target 17 lun 0 (pass9,da9)
<HITACHI HUS156030VLS600 A5D0> at scbus0 target 18 lun 0 (pass10,da10)
<HITACHI HUS156030VLS600 A760> at scbus0 target 19 lun 0 (pass11,da11)
<SUN HYDE12 0341> at scbus0 target 20 lun 0 (pass12,ses0)
<AHCI SGPIO Enclosure 1.00 0001> at scbus7 target 0 lun 0 (pass13,ses1)
So now the drive stuck in the last drive bay is seen as da0 and the
drive in the first drive bay as da11
But: In the controller BIOS the scan order of the drives did not
change at all with the new firmware! So the change is only in the way
FreeBSD sees the drives.
My explanation for this change in drive ordering is, that my 9211-8i
is a SUN branded one (SGX-SAS6-INT-Z) and the server is a SUN server.
So maybe the original firmware contained some adaptations for this
server, that are missing in the new firmware.
Can the way FreeBSD orders scanned SAS drives be changed? If not, no
problem, as I use partition labels for my zfs pools and the disks are
also labeled on the server as well.
You can do things in /boot/loader.conf to hard-code bus and drive
assignments.
e.g.
hint.da.0.at="scbus0"
hint.da.0.target="19"
hint.da.0.unit="0"
hint.da.1.at="scbus0"
hint.da.1.target="18"
hint.da.1.unit="0"
See scsi(4) or cam(4) for more hints.
You're probably better off using GPT labels though, as they will survive
any future disk order changes. The fact that the target numbers changed
means that loader.conf changes will fix the current issue but may not work
properly after any future firmware updates.
Gary
The driver and card have a way of keeping the order of disks persistent
across reboots. Probably the reason that your drive order has changed is
that when you flashed the new firmware on the card, the NVRAM that stores
this information on your card was erased. You can set your card up for
either disk persistent mapping or Enclosure/Slot mapping, or you can turn
mapping off altogether. When you boot up the first time, as disks are
discovered they are placed in the mapping table on the card and then kept
in that order forever, until the data is erased or mapping is turned off.
So, I would say it's possible that you do not have mapping turned on, or
it's possible that the new firmware changed this setting from disk
persistence to Enclosure/Slot persistence or vice versa, or something like
that. Maybe too much information, but that's probably what happened.
Slawa Olhovchenkov
2015-11-14 20:27:00 UTC
Permalink
Post by Kai Gallasch
So now the drive stuck in the last drive bay is seen as da0 and the
drive in the first drive bay as da11
But: In the controller BIOS the scan order of the drives did not change
at all with the new firmware! So the change is only in the way FreeBSD
sees the drives.
For ZFS this does not matter.
Borja Marcos
2015-11-16 09:00:32 UTC
Permalink
You can do things in /boot/loader.conf to hard-code bus and drive
assignments.
e.g.
hint.da.0.at="scbus0"
hint.da.0.target="19"
hint.da.0.unit="0"
hint.da.1.at="scbus0"
hint.da.1.target="18"
hint.da.1.unit="0"
Beware, the target number assignment is not predictable. There's no
guarantee, especially if you replace a disk.





Borja.

Freddie Cash
2015-11-16 19:40:12 UTC
Permalink
Post by Kevin Oberman
You can do things in /boot/loader.conf to hard-code bus and drive
assignments.
e.g.
hint.da.0.at="scbus0"
hint.da.0.target="19"
hint.da.0.unit="0"
hint.da.1.at="scbus0"
hint.da.1.target="18"
hint.da.1.unit="0"
Beware, the target number assignment is not predictable. There's no
guarantee especially if you replace
a disk.
Borja.
As already mentioned, unless you are using zfs, use gpart to label your file
systems/disks. Then use the /dev/gpt/LABEL as the mount device in fstab.
​Even if you are using ZFS, labelling the drives with the location of the
disk in the system (enclosure, column, row, whatever) makes things so much
easier to work with when there are disk-related issues.

Just create a single partition that covers the whole disk, label it, and
use the label to create the vdevs in the pool.​
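
To make that concrete, a minimal sketch of what this looks like on FreeBSD
(the device da4, the pool name tank, and labels like enc0a6 for "enclosure 0,
column A, row 6" are just examples):

gpart create -s gpt da4
gpart add -t freebsd-zfs -a 1m -l enc0a6 da4
# the partition now shows up as /dev/gpt/enc0a6, so use that name in the pool
zpool create tank mirror gpt/enc0a6 gpt/enc0b6
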
--
Freddie Cash
***@gmail.com
Slawa Olhovchenkov
2015-11-16 20:57:34 UTC
Permalink
Post by Freddie Cash
Post by Kevin Oberman
You can do things in /boot/loader.conf to hard-code bus and drive
assignments.
e.g.
hint.da.0.at="scbus0"
hint.da.0.target="19"
hint.da.0.unit="0"
hint.da.1.at="scbus0"
hint.da.1.target="18"
hint.da.1.unit="0"
Beware, the target number assignment is not predictable. There's no
guarantee especially if you replace
a disk.
Borja.
As already mentioned, unless you are using zfs, use gpart to label your file
systems/disks. Then use the /dev/gpt/LABEL as the mount device in fstab.
​Even if you are using ZFS, labelling the drives with the location of the
disk in the system (enclosure, column, row, whatever) makes things so much
easier to work with when there are disk-related issues.
Just create a single partition that covers the whole disk, label it, and
use the label to create the vdevs in the pool.​
Bad idea.
A disk re-placed in a different bay doesn't get relabelled automatically.
Another issue: when disks are placed into bays by remote hands in a data
center, I really don't know how the disks are distributed across the bays.
The best way to identify a disk is to use the enclosure services.

I have many sites with ZFS on whole disks and some sites with ZFS on
GPT partitions. ZFS on GPT is heavier to administer.
Freddie Cash
2015-11-16 21:19:55 UTC
Permalink
Post by Kevin Oberman
Post by Freddie Cash
Post by Kevin Oberman
As already mentioned, unless you are using zfs, use gpart to label your
file
Post by Freddie Cash
Post by Kevin Oberman
systems/disks. Then use the /dev/gpt/LABEL as the mount device in
fstab.
Post by Freddie Cash
​Even if you are using ZFS, labelling the drives with the location of the
disk in the system (enclosure, column, row, whatever) makes things so
much
Post by Freddie Cash
easier to work with when there are disk-related issues.
Just create a single partition that covers the whole disk, label it, and
use the label to create the vdevs in the pool.​
Bad idea.
A disk re-placed in a different bay doesn't get relabelled automatically.
​Did the original disk get labelled automatically? No, you had to do that
when you first started using it. So, why would you expect a replaced disk
to get labelled automatically?

Offline the dead/dying disk.
Physically remove the disk.
Insert the new disk.
Partition / label the new disk.
"zpool replace" using the new label to get it into the pool (see the sketch
below).
Post by Kevin Oberman
Another issue: when disks are placed into bays by remote hands in a data
center, I really don't know how the disks are distributed across the bays.
​You label the disks as they are added to the system the first time. That
way, you always know where each disk is located, and you only deal with the
labels.

Then, when you need to replace a disk (or ask someone in a remote location
to replace it) it's a simple matter: the label on the disk itself tells
you where the disk is physically located. And it doesn't change if the
controller decides to change the direction it enumerates devices.
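
A minimal sketch of the replacement steps above, assuming GPT labels (the pool
name tank, the new disk da12, and the label enc0a6 are just examples):

zpool offline tank gpt/enc0a6           # take the failing disk out of service
# ...physically swap the disk in that bay, then partition and label the new one...
gpart create -s gpt da12
gpart add -t freebsd-zfs -a 1m -l enc0a6 da12
zpool replace tank gpt/enc0a6           # resilver onto the new disk under the same label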

Which is easier to tell someone in a remote location:
Replace disk enc0a6 (meaning enclosure 0, column A, row 6)?
or
Replace the disk called da36?​
​or
Find the disk with serial number XXXXXXXX?
or
Replace the disk where the light is (hopefully) flashing (but I can't
tell you which enclosure, front or back, or anything else like that)?

The first one lets you know exactly where the disk is located physically.

The second one just tells you the name of the device as determined by the
OS, but doesn't tell you anything about where it is located. And it can
change with a kernel update, driver update, or firmware update!

The third requires you to pull every disk in turn to read the serial number
off the drive itself.

In order for the second or third option to work, you'd have to write down
the device names and/or serial numbers and stick that onto the drive bay
itself.​
Post by Kevin Oberman
The best way to identify a disk is to use the enclosure services.
​Only if your enclosure services are actually working (or even enabled).
I've yet to work on a box where that actually works (we custom-build our
storage boxes using OTS hardware).

Best way, IMO, is to use the physical location of the device as the actual
device name itself. That way, there's never any ambiguity at the physical
layer, the driver layer, the OS layer, or the ZFS pool layer.​
Post by Kevin Oberman
I have many sites with ZFS on whole disks and some sites with ZFS on
GPT partitions. ZFS on GPT is heavier to administer.
​It's 1 extra step: partition the drive, supplying the location of the
drive as the label for the partition.

Everything else works exactly the same.

I used to do everything with whole drives and no labels. Did that for
about a month, until 2 separate drives on separate controllers died (in a
24-bay setup) and I couldn't figure out where they were located as a BIOS
upgrade changed which controller loaded first. And then I had to work on a
server that someone else configured with direct-attach bays (24 cables)
that were connected almost at random.

Then I used glabel(8) to label the entire disk, and things were much
better. But that didn't always play well with 4K drives, and replacing
drives that were the same size didn't always work as the number of sectors
in each disk was different (ZFS plays better with this now).

Then I started to GPT partition things, and life has been so much simpler.
All the partitions are aligned to 1 MB, and I can manually set the size of
the partition to work around different physical sector counts. All the
partitions are labelled using the physical location of the disk (originally
just row/column naming like a spreadsheet, but now I'm adding enclosure
name as well as we expand to multiple enclosures per system). It's so much
simpler now, ESPECIALLY when I have to get someone to do something
remotely. :)

​Everyone has their own way to manage things. I just haven't seen any
better setup than labelling the drives themselves using their physical
location.​
--
Freddie Cash
***@gmail.com
Patrick M. Hausen
2015-11-17 08:08:12 UTC
Permalink
Hi, all,
​You label the disks as they are added to the system the first time. That
way, you always know where each disk is located, and you only deal with the
labels.
we do the same for obvious reasons. But I always wonder about the possible
downsides, because ZFS documentation explicitly states:

ZFS operates on raw devices, so it is possible to create a storage pool comprised of logical
volumes, either software or hardware. This configuration is not recommended, as ZFS works
best when it uses raw physical devices. Using logical volumes might sacrifice performance,
reliability, or both, and should be avoided.

(from http://docs.oracle.com/cd/E19253-01/819-5461/gbcik/index.html)

Can anyone shed some light on why not using raw devices might sacrifice
performance or reliability? Or is this just outdated folklore?

Thanks,
Patrick
--
punkt.de GmbH * Kaiserallee 13a * 76133 Karlsruhe
Tel. 0721 9109 0 * Fax 0721 9109 100
***@punkt.de http://www.punkt.de
Gf: Jürgen Egeling AG Mannheim 108285
Miroslav Lachman
2015-11-17 08:22:56 UTC
Permalink
Post by Patrick M. Hausen
Hi, all,
Post by Freddie Cash
​You label the disks as they are added to the system the first time. That
way, you always know where each disk is located, and you only deal with the
labels.
we do the same for obvious reasons. But I always wonder about the possible
ZFS operates on raw devices, so it is possible to create a storage pool comprised of logical
volumes, either software or hardware. This configuration is not recommended, as ZFS works
best when it uses raw physical devices. Using logical volumes might sacrifice performance,
reliability, or both, and should be avoided.
(from http://docs.oracle.com/cd/E19253-01/819-5461/gbcik/index.html)
Can anyone shed some light on why not using raw devices might sacrifice
performance or reliability? Or is this just outdated folklore?
That was the case on Solaris but not on FreeBSD. If you were using
partitions on Solaris, the drive cache was disabled (or something like
that, I am not 100% sure).

Miroslav Lachman
krad
2015-11-17 08:23:06 UTC
Permalink
From what I remember it's a control thing. If you have another layer below
ZFS, be it software-based or hardware-based, ZFS can't be sure what is going
on and therefore can't guarantee anything. This is quite a big thing when it
comes to data integrity, which is a big reason to use ZFS. I remember having
to be very careful with some external caching arrays and making sure that
they flushed correctly, as they often ignore the SCSI flush commands. This
is one reason why I would always use the IT-based firmware rather than the
RAID one, as it's less likely to lead to issues.
Post by Patrick M. Hausen
Hi, all,
Post by Freddie Cash
​You label the disks as they are added to the system the first time.
That
Post by Freddie Cash
way, you always know where each disk is located, and you only deal with
the
Post by Freddie Cash
labels.
we do the same for obvious reasons. But I always wonder about the possible
ZFS operates on raw devices, so it is possible to create a storage
pool comprised of logical
volumes, either software or hardware. This configuration is not
recommended, as ZFS works
best when it uses raw physical devices. Using logical volumes
might sacrifice performance,
reliability, or both, and should be avoided.
(from http://docs.oracle.com/cd/E19253-01/819-5461/gbcik/index.html)
Can anyone shed some light on why not using raw devices might sacrifice
performance or reliability? Or is this just outdated folklore?
Thanks,
Patrick
--
punkt.de GmbH * Kaiserallee 13a * 76133 Karlsruhe
Tel. 0721 9109 0 * Fax 0721 9109 100
Gf: Jürgen Egeling AG Mannheim 108285
krad
2015-11-17 08:32:33 UTC
Permalink
It was a control thing again: if you were using a partition, another
application could be using the drive on another partition, so ZFS couldn't
guarantee exclusive use of the disk and had to be more careful in the way it
operated the drive. I think this meant it went into write-through mode, like
you say.
Post by Miroslav Lachman
Post by Patrick M. Hausen
Hi, all,
Post by Freddie Cash
​You label the disks as they are added to the system the first time.
That
way, you always know where each disk is located, and you only deal with the
labels.
we do the same for obvious reasons. But I always wonder about the possible
ZFS operates on raw devices, so it is possible to create a
storage pool comprised of logical
volumes, either software or hardware. This configuration is not
recommended, as ZFS works
best when it uses raw physical devices. Using logical volumes
might sacrifice performance,
reliability, or both, and should be avoided.
(from http://docs.oracle.com/cd/E19253-01/819-5461/gbcik/index.html)
Can anyone shed some light on why not using raw devices might sacrifice
performance or reliability? Or is this just outdated folklore?
It was on Solaris but not on FreeBSD. If you were using partitions on
Solaris the drive cache was disabled (or something like that, I am not 100%
sure)
Miroslav Lachman
Freddie Cash
2015-11-17 16:07:47 UTC
Permalink
Post by Patrick M. Hausen
Hi, all,
Post by Freddie Cash
​You label the disks as they are added to the system the first time.
That
Post by Freddie Cash
way, you always know where each disk is located, and you only deal with
the
Post by Freddie Cash
labels.
we do the same for obvious reasons. But I always wonder about the possible
ZFS operates on raw devices, so it is possible to create a storage
pool comprised of logical
volumes, either software or hardware. This configuration is not
recommended, as ZFS works
best when it uses raw physical devices. Using logical volumes
might sacrifice performance,
reliability, or both, and should be avoided.
(from http://docs.oracle.com/cd/E19253-01/819-5461/gbcik/index.html)
Can anyone shed some light on why not using raw devices might sacrifice
performance or reliability? Or is this just outdated folklore?
​On Solaris, using raw devices allows ZFS to enable the caches on the disks
themselves, while using any kind of partitioning on the disk forces the
caches to be disabled.

This is not an issue on FreeBSD due to the way GEOM works. Caches on disks
are enabled regardless of how the disk is accessed (raw, dd-partitioned,
MBR-partitioned, GPT-partitioned, gnop, geli, whatever).
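
If you want to confirm what a given drive is doing, the caching mode page can
be read from FreeBSD (a sketch; da0 is just an example, and SATA disks behind
some controllers may not expose the page the same way):

camcontrol modepage da0 -m 8    # the WCE field shows whether the write cache is enabled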

This is a common misconception and FAQ with ZFS on FreeBSD and one reason
to not take any Sun/Oracle documentation at face value, as it doesn't
always apply to FreeBSD.

There were several posts from pjd@ about this back in the 7.x days when ZFS
was first imported to FreeBSD.
--
Freddie Cash
***@gmail.com
krad
2015-11-17 08:37:14 UTC
Permalink
I disagree; get the remote hands to copy the serial number to an easily
visible location on the drive when it's in the enclosure. Then label the
drives with the serial number (or a compatible version of it). That way the
label is tied to the drive, and you don't have to rely on the remote hands
100%. Better still, do the physical labelling yourself.
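
A rough sketch of that approach on FreeBSD (da4 and the label format are only
examples; camcontrol's -S inquiry option prints just the serial number, which
may need trimming):

SERIAL=$(camcontrol inquiry da4 -S)
gpart create -s gpt da4
gpart add -t freebsd-zfs -a 1m -l "sn-${SERIAL}" da4   # GPT label carries the serial
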
Post by Freddie Cash
Post by Kevin Oberman
Post by Freddie Cash
Post by Kevin Oberman
As already mentioned, unless you are using zfs, use gpart to label
your
Post by Kevin Oberman
file
Post by Freddie Cash
Post by Kevin Oberman
systems/disks. Then use the /dev/gpt/LABEL as the mount device in
fstab.
Post by Freddie Cash
​Even if you are using ZFS, labelling the drives with the location of
the
Post by Kevin Oberman
Post by Freddie Cash
disk in the system (enclosure, column, row, whatever) makes things so
much
Post by Freddie Cash
easier to work with when there are disk-related issues.
Just create a single partition that covers the whole disk, label it,
and
Post by Kevin Oberman
Post by Freddie Cash
use the label to create the vdevs in the pool.​
Bad idea.
A disk re-placed in a different bay doesn't get relabelled automatically.
​Did the original disk get labelled automatically? No, you had to do that
when you first started using it. So, why would you expect a replaced disk
to get labelled automatically?
Offline the dead/dying disk.
Physically remove the disk.
Insert the new disk.
Partition / label the new disk.
"zfs replace" using the new label to get it into the pool.​
Post by Kevin Oberman
Another issue: when disks are placed into bays by remote hands in a data
center, I really don't know how the disks are distributed across the bays.
​You label the disks as they are added to the system the first time. That
way, you always know where each disk is located, and you only deal with the
labels.
Then, when you need to replace a disk (or ask someone in a remote location
to replace it) it's a simple matter: the label on the disk itself tells
you where the disk is physically located. And it doesn't change if the
controller decides to change the direction it enumerates devices.
Replace disk enc0a6 (meaning enclosure 0, column A, row 6)?
or
Replace the disk called da36?​
​or
Find the disk with serial number XXXXXXXX?
or
Replace the disk where the light is (hopefully) flashing (but I can't
tell you which enclosure, front or back, or anything else like that)?
The first one lets you know exactly where the disk is located physically.
The second one just tells you the name of the device as determined by the
OS, but doesn't tell you anything about where it is located. And it can
change with a kernel update, driver update, or firmware update!
The third requires you to pull every disk in turn to read the serial number
off the drive itself.
In order for the second or third option to work, you'd have to write down
the device names and/or serial numbers and stick that onto the drive bay
itself.​
Post by Kevin Oberman
The best way to identify a disk is to use the enclosure services.
​Only if your enclosure services are actually working (or even enabled).
I've yet to work on a box where that actually works (we custom-build our
storage boxes using OTS hardware).
Best way, IMO, is to use the physical location of the device as the actual
device name itself. That way, there's never any ambiguity at the physical
layer, the driver layer, the OS layer, or the ZFS pool layer.​
Post by Kevin Oberman
I have many sites with ZFS on whole disks and some sites with ZFS on
GPT partitions. ZFS on GPT is heavier to administer.
​It's 1 extra step: partition the drive, supplying the location of the
drive as the label for the partition.
Everything else works exactly the same.
I used to do everything with whole drives and no labels. Did that for
about a month, until 2 separate drives on separate controllers died (in a
24-bay setup) and I couldn't figure out where they were located as a BIOS
upgrade changed which controller loaded first. And then I had to work on a
server that someone else configured with direct-attach bays (24 cables)
that were connected almost at random.
Then I used glabel(8) to label the entire disk, and things were much
better. But that didn't always play well with 4K drives, and replacing
drives that were the same size didn't always work as the number of sectors
in each disk was different (ZFS plays better with this now).
Then I started to GPT partition things, and life has been so much simpler.
All the partitions are aligned to 1 MB, and I can manually set the size of
the partition to work around different physical sector counts. All the
partitions are labelled using the physical location of the disk (originally
just row/column naming like a spreadsheet, but now I'm adding enclosure
name as well as we expand to multiple enclosures per system). It's so much
simpler now, ESPECIALLY when I have to get someone to do something
remotely. :)
​Everyone has their own way to manage things. I just haven't seen any
better setup than labelling the drives themselves using their physical
location.​
--
Freddie Cash
Slawa Olhovchenkov
2015-11-18 10:25:02 UTC
Permalink
Post by Freddie Cash
Post by Kevin Oberman
Post by Freddie Cash
Post by Kevin Oberman
As already mentioned, unless you are using zfs, use gpart to label your
file
Post by Freddie Cash
Post by Kevin Oberman
systems/disks. Then use the /dev/gpt/LABEL as the mount device in
fstab.
Post by Freddie Cash
​Even if you are using ZFS, labelling the drives with the location of the
disk in the system (enclosure, column, row, whatever) makes things so
much
Post by Freddie Cash
easier to work with when there are disk-related issues.
Just create a single partition that covers the whole disk, label it, and
use the label to create the vdevs in the pool.​
Bad idea.
A disk re-placed in a different bay doesn't get relabelled automatically.
​Did the original disk get labelled automatically? No, you had to do that
when you first started using it. So, why would you expect a
replaced disk
Initial labeling is a problem too.
For a new chassis with 36 identical disks (already installed) -- what is the
simple way to label the disks?
Post by Freddie Cash
to get labelled automatically?
Keeping it consistent is another problem.
Post by Freddie Cash
Offline the dead/dying disk.
Physically remove the disk.
Insert the new disk.
Partition / label the new disk.
"zfs replace" using the new label to get it into the pool.​
A new disk can be inserted into any free bay.
This may be done by remote hands.
And I can end up missing the information about where the disk is placed.
Post by Freddie Cash
Post by Kevin Oberman
Another issue: when disks are placed into bays by remote hands in a data
center, I really don't know how the disks are distributed across the bays.
​You label the disks as they are added to the system the first time. That
way, you always know where each disk is located, and you only deal with the
labels.
Then, when you need to replace a disk (or ask someone in a remote location
to replace it) it's a simple matter: the label on the disk itself tells
you where the disk is physically located. And it doesn't change if the
controller decides to change the direction it enumerates devices.
"Replace the disk in the bay with the blinking LED"

Author: bapt
Date: Sat Sep 5 00:06:01 2015
New Revision: 287473
URL: https://svnweb.freebsd.org/changeset/base/287473

Log:
Add a new sesutil(8) utility

This is a utility for managing SCSI Enclosure Services (SES)
devices.

For now only one command is supported, "locate", which will change the
state of the external LED associated with a given disk.

Usage is the following:
sesutil locate disk [on|off]

Disk can be a device name: "da12" or a special keyword: "all".
Post by Freddie Cash
Replace disk enc0a6 (meaning enclosure 0, column A, row 6)?
or
Replace the disk called da36?​
​or
Find the disk with serial number XXXXXXXX?
or
Replace the disk where the light is (hopefully) flashing (but I can't
tell you which enclosure, front or back, or anything else like that)?
The first one lets you know exactly where the disk is located physically.
The second one just tells you the name of the device as determined by the
OS, but doesn't tell you anything about where it is located. And it can
change with a kernel update, driver update, or firmware update!
The third requires you to pull every disk in turn to read the serial number
off the drive itself.
Usually the serial number can be read without pulling the disk (for
SuperMicro cases this is true; remote hands replaced a disk by S/N for me
without pulling every disk).
Post by Freddie Cash
In order for the second or third option to work, you'd have to write down
the device names and/or serial numbers and stick that onto the drive bay
itself.​
Post by Kevin Oberman
The best way to identify a disk is to use the enclosure services.
​Only if your enclosure services are actually working (or even enabled).
I've yet to work on a box where that actually works (we custom-build our
storage boxes using OTS hardware).
Best way, IMO, is to use the physical location of the device as the actual
device name itself. That way, there's never any ambiguity at the physical
layer, the driver layer, the OS layer, or the ZFS pool layer.​
Post by Kevin Oberman
I have many sites with ZFS on whole disks and some sites with ZFS on
GPT partitions. ZFS on GPT is heavier to administer.
​It's 1 extra step: partition the drive, supplying the location of the
drive as the label for the partition.
Everything else works exactly the same.
I used to do everything with whole drives and no labels. Did that for
about a month, until 2 separate drives on separate controllers died (in a
24-bay setup) and I couldn't figure out where they were located as a BIOS
upgrade changed which controller loaded first. And then I had to work on a
server that someone else configured with direct-attach bays (24 cables)
that were connected almost at random.
All the servers I currently use have some randomness in how controllers
and HDDs are detected and reported. That is no problem for ZFS and/or for
replacement by remote hands (by S/N).
Freddie Cash
2015-11-18 16:15:15 UTC
Permalink
Post by Slawa Olhovchenkov
Post by Freddie Cash
​Did the original disk get labelled automatically? No, you had to do
that
Post by Freddie Cash
when you first started using it. So, why would you expect a
replaced disk
Initial labeling is problem too.
For new chassis with 36 identical disk (already installed) -- what is
simple way to labeling disks?
​That's the easy part. Boot with all the drives pulled out a bit, so they
aren't connected/detected.

Insert first disk, wait for it to be detected and get a /dev node, then
partition/label it. Repeat for each disk. Takes about 5 minutes to label
a 45-bay JBOD chassis.

No different than how you would get the serial number off each disk before
inserting them into the chassis, so you'd know for sure which slot they're
in.

"Replace disk in bay with blinked led"
Post by Slawa Olhovchenkov
Author: bapt
Date: Sat Sep 5 00:06:01 2015
​And, how did you manage to do that before Sep 5, 2015?​

Usaly serial number can be read w/o pull disk (for SuperMicro cases
Post by Slawa Olhovchenkov
this is true, remote hand replaced disk by S/N for me w/o pull every disk).
​How? We have all SuperMicro storage chassis (SC2xx, SC8xx, and JBODs) and
server chassis in our data centre here. None of them allow you to read the
serial number off the physical disk without pulling the disk out
completely.​ You'd have to manually label each bay with the serial number
before inserting the disk into the chassis ... which is no different from
labelling the device in the OS. Except it's much faster to find a 3D
co-ordinate (enc0a6) than to scan every bay looking for a specific serial
number.

But, to each their own. :) Everyone has their "perfect" system that works
for them. :D
--
Freddie Cash
***@gmail.com
Slawa Olhovchenkov
2015-11-18 16:54:16 UTC
Permalink
Post by Freddie Cash
Post by Slawa Olhovchenkov
Post by Freddie Cash
​Did the original disk get labelled automatically? No, you had to do
that
Post by Freddie Cash
when you first started using it. So, why would you expect a replaced disk
Initial labeling is problem too.
For new chassis with 36 identical disk (already installed) -- what is
simple way to labeling disks?
​That's the easy part. Boot with all the drives pulled out a bit, so they
aren't connected/detected.
Insert first disk, wait for it to be detected and get a /dev node, then
partition/label it. Repeat for each disk. Takes about 5 minutes to label
a 45-bay JBOD chassis.
Hmm, the server is more than 1700 km away from me; how can I do this?
Post by Freddie Cash
No different than how you would get the serial number off each disk before
inserting them into the chassis, so you'd know for sure which slot they're
in.
This is done by the manufacturer,
or in the DC after ordering the service.
I don't assemble servers, in general.
And I don't even see the servers, so I don't know how they look.
Post by Freddie Cash
"Replace disk in bay with blinked led"
Post by Slawa Olhovchenkov
Author: bapt
Date: Sat Sep 5 00:06:01 2015
​And, how did you manage to do that before Sep 5, 2015?​
A detached disk doesn't blink its activity LED.
Post by Freddie Cash
Usaly serial number can be read w/o pull disk (for SuperMicro cases
Post by Slawa Olhovchenkov
this is true, remote hand replaced disk by S/N for me w/o pull every disk).
​How? We have all SuperMicro storage chassis (SC2xx, SC8xx, and JBODs) and
server chassis in our data centre here. None of them allow you to read the
serial number off the physical disk without pulling the disk out
completely.​ You'd have to manually label each bay with the serial number
before inserting the disk into the chassis ... which is no different from
labelling the device in the OS. Except it's much faster to find a 3D
co-ordinate (enc0a6) than to scan every bay looking for a specific serial
number.
For the SC847A this was done for me in an NL DC (as I understand it,
through the holes, at an angle).
Post by Freddie Cash
But, to each their own. :) Everyone has their "perfect" system that works
for them. :D
--
Freddie Cash
Kevin Oberman
2015-11-16 19:36:21 UTC
Permalink
You can do things in /boot/loader.conf to hard-code bus and drive
assignments.
e.g.
hint.da.0.at="scbus0"
hint.da.0.target="19"
hint.da.0.unit="0"
hint.da.1.at="scbus0"
hint.da.1.target="18"
hint.da.1.unit="0"
Beware, the target number assignment is not predictable. There's no
guarantee especially if you replace
a disk.
Borja.
As already mentioned, unless you are using zfs, use gpart to label your file
systems/disks. Then use the /dev/gpt/LABEL as the mount device in fstab.
--
Kevin Oberman, Part time kid herder and retired Network Engineer
E-mail: ***@gmail.com
PGP Fingerprint: D03FB98AFA78E3B78C1694B318AB39EF1B055683