Hi All,
Any of you guys know of a PCIe card that will do hardware RAID 1 with two NVMe drives?
I have found some, but they are way too elaborate and, as such, way too expensive.
Many thanks, -T
On 2022-06-23 23:04, ToddAndMargo via users wrote:
Any of you guys know of a PCIe card that will do hardware RAID 1 with two NVMe drives?
I have found some, but they are way too elaborate and, as such, way too expensive.
Maybe you have a weird hardware requirement that explains the question, but why do you want a RAID-1 card? Software RAID is much cheaper, faster, uses less power and is considerably more reliable. If you run BTRFS or ZFS, then it is also easier on the drives, and recovery times are hundreds of times lower than what you can do with hardware. That translates directly into hundreds of times better reliability numbers. Best of all, you do not need matching drives or drive sizes to implement RAID-1. For a long time now, the only reason to run hardware RAID has been underneath back-level versions of Windows or VMware.
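As a minimal sketch of what that looks like in practice (assuming two example devices, /dev/nvme0n1 and /dev/nvme1n1, which do not need to be the same size):

    # mirror both data and metadata across the two drives
    mkfs.btrfs -d raid1 -m raid1 /dev/nvme0n1 /dev/nvme1n1
    mount /dev/nvme0n1 /mnt
    # usable capacity is limited by the smaller drive, but the sizes need not match
    btrfs filesystem usage /mnt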
--
John Mellor
On Fri, Jun 24, 2022 at 7:35 AM John Mellor john.mellor@gmail.com wrote:
On 2022-06-23 23:04, ToddAndMargo via users wrote:
Any of you guys know of a PCIe card that will do hardware RAID 1 with two NVMe drives?
I have found some, but they are way too elaborate and, as such, way too expensive.
Maybe you have a weird hardware requirement that explains the question, but why do you want a RAID-1 card? Software RAID is much cheaper, faster, uses less power and is considerably more reliable. If you run BTRFS or ZFS, then it is also easier on the drives, and recovery times are hundreds of times lower than what you can do with hardware. That translates directly into hundreds of times better reliability numbers. Best of all, you do not need matching drives or drive sizes to implement RAID-1. For a long time now, the only reason to run hardware RAID has been underneath back-level versions of Windows or VMware.
Agreed. I have a 4x4 TB Seagate Terascale drive array running BTRFS in RAID1. Until it starts to fill up, I'm going to leave it RAID1 while BTRFS continues to improve RAID5 :)
And then I can convert it on the fly!
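The on-the-fly conversion is just a balance with convert filters; roughly (assuming the array is mounted at /mnt, and keeping metadata on raid1, which is the usual recommendation):

    # convert data chunks to raid5 while leaving metadata as raid1
    btrfs balance start -dconvert=raid5 -mconvert=raid1 /mnt
    # watch progress
    btrfs balance status /mnt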
Thanks, Richard
On 6/24/22 05:35, John Mellor wrote:
On 2022-06-23 23:04, ToddAndMargo via users wrote:
Any of you guys know of a PCIe card that will do hardware RAID 1 with two NVMe drives?
I have found some, but they are way too elaborate and, as such, way too expensive.
Maybe you have a weird hardware requirement that explains the question, but why do you want a RAID-1 card? Software RAID is much cheaper, faster, uses less power and is considerably more reliable. If you run BTRFS or ZFS, then it is also easier on the drives, and recovery times are hundreds of times lower than what you can do with hardware. That translates directly into hundreds of times better reliability numbers. Best of all, you do not need matching drives or drive sizes to implement RAID-1. For a long time now, the only reason to run hardware RAID has been underneath back-level versions of Windows or VMware.
--
John Mellor
Hi John,
I am working with a customer's motherboard where iRSTe is defective, so I just want to chuck it and get hardware RAID.
Otherwise, you are correct.
-T
RAID cards of different brands won't typically read each other's configs, so any replacement card would need to be the same card, or at least from the same manufacturer, to read the current config and disks.
On Fri, Jun 24, 2022, 4:40 AM ToddAndMargo via users < users@lists.fedoraproject.org> wrote:
On 6/24/22 05:35, John Mellor wrote:
On 2022-06-23 23:04, ToddAndMargo via users wrote:
Any of you guys know of a PCIe card that will do hardware RAID 1 with two NVMe drives?
I have found some, but they are way too elaborate and, as such, way too expensive.
Maybe you have a weird hardware requirement that explains the question, but why do you want a RAID-1 card? Software RAID is much cheaper, faster, uses less power and is considerably more reliable. If you run BTRFS or ZFS, then it is also easier on the drives, and recovery times are hundreds of times lower than what you can do with hardware. That translates directly into hundreds of times better reliability numbers. Best of all, you do not need matching drives or drive sizes to implement RAID-1. For a long time now, the only reason to run hardware RAID has been underneath back-level versions of Windows or VMware.
--
John Mellor
Hi John,
I am working with a customer's motherboard where iRSTe is defective, so I just want to chuck it and get hardware RAID.
Otherwise, you are correct.
-T
On Fri, Jun 24, 2022 at 2:10 PM ToddAndMargo via users users@lists.fedoraproject.org wrote:
On 6/24/22 10:42, Roger Heflin wrote:
RAID cards of different brands won't typically read each other's configs, so any replacement card would need to be the same card, or at least from the same manufacturer, to read the current config and disks.
Yikes!
With that said, there are really not many makers of cards, and their cards are usually based on the same few models. Other cards are really not true hardware RAID controllers. You can find out by getting the model number from lspci.
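For example, something like this will show what the controller really is (a rough sketch; the grep pattern is only a crude filter, and the 01:00.0 address is just an example):

    # list storage controllers with vendor:device IDs
    lspci -nn | grep -Ei 'raid|sas|non-volatile'
    # then look up the exact model of one entry
    lspci -v -s 01:00.0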
On 6/24/22 11:16, Mauricio Tavares wrote:
On Fri, Jun 24, 2022 at 2:10 PM ToddAndMargo via users users@lists.fedoraproject.org wrote:
On 6/24/22 10:42, Roger Heflin wrote:
RAID cards of different brands won't typically read each other's configs, so any replacement card would need to be the same card, or at least from the same manufacturer, to read the current config and disks.
Yikes!
With that said, there are really not many makers of cards, and their cards are usually based on the same few models. Other cards are really not true hardware RAID controllers. You can find out by getting the model number from lspci.
And there is always DriveSavers:
https://drivesaversdatarecovery.com/
Odds are, with a bit of work, you can find the start of the data so that you can mount it. With RAID 1 it should be doable. With RAID 5/6 you need to know details about the algorithm used by the card. I think there is a disk-scanning tool that can be used to find the LVM and/or filesystem header, and from that you could use dmsetup to create a device starting at the right spot for the header to work (without putting a partition on the device). I use software RAID just so I do not need the same type of RAID card to read it on a new or repaired system.
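A rough sketch of that dmsetup trick (the 2048-sector offset and /dev/sdb are placeholders; you would have to find the real start of the filesystem or LVM header first):

    # suppose the filesystem header was found 2048 sectors into /dev/sdb
    OFFSET=2048
    SIZE=$(( $(blockdev --getsz /dev/sdb) - OFFSET ))
    # map a linear device-mapper target that starts at that offset
    dmsetup create recovered --table "0 $SIZE linear /dev/sdb $OFFSET"
    # then try a read-only mount of the mapped device
    mount -o ro /dev/mapper/recovered /mnt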
On Fri, Jun 24, 2022, 10:38 AM ToddAndMargo via users < users@lists.fedoraproject.org> wrote:
On 6/24/22 11:16, Mauricio Tavares wrote:
On Fri, Jun 24, 2022 at 2:10 PM ToddAndMargo via users users@lists.fedoraproject.org wrote:
On 6/24/22 10:42, Roger Heflin wrote:
RAID cards of different brands won't typically read each other's configs, so any replacement card would need to be the same card, or at least from the same manufacturer, to read the current config and disks.
Yikes!
With that said, there are really not many makers of cards, and their cards are usually based on the same few models. Other cards are really not true hardware RAID controllers. You can find out by getting the model number from lspci.
And there is always DriveSavers:
https://drivesaversdatarecovery.com/
On Fri, Jun 24, 2022 at 08:35:31AM -0400, John Mellor wrote:
On 2022-06-23 23:04, ToddAndMargo via users wrote:
Any of you guys know of a PCIe card that will do hardware RAID 1 with two NVMe drives?
I have found some, but they are way too elaborate and, as such, way too expensive.
Maybe you have a weird hardware requirement that explains the question, but why do you want a RAID-1 card? Software RAID is much cheaper, faster, uses less power and is considerably more reliable. If you run BTRFS or ZFS, then it is also easier on the drives, and recovery times are hundreds of times lower than what you can do with hardware. That translates directly into hundreds of times better reliability numbers. Best of all, you do not need matching drives or drive sizes to implement RAID-1. For a long time now, the only reason to run hardware RAID has been underneath back-level versions of Windows or VMware.
When I last tried, about two years ago, using RAID-5 via LVM with no file system, I was unable to come close to maximum performance for sequential-only writes, where you do not need to read data off of the disks - it was still reading and writing.
I couldn't find any details on how to get this to work or if it was possible.
I tried tuning various settings (I can't recall specifics, but generally increasing the memory it uses) but with little change in performance.
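For what it's worth, on a plain mdadm RAID-5 (not necessarily the LVM-only setup described above), the setting people usually raise for sequential writes is the stripe cache, roughly:

    # check the array
    cat /proc/mdstat
    # raise the stripe cache for md0 (default is 256 entries; memory use scales with member count)
    echo 8192 | sudo tee /sys/block/md0/md/stripe_cache_size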
-- Patrick
On Fri, Jun 24, 2022 at 12:05 AM ToddAndMargo via users < users@lists.fedoraproject.org> wrote:
Hi All,
Any of you guys know of a PCIe card that will do hardware RAID 1 with two NVMe drives?
I have found some, but they are way too elaborate and, as such, way too expensive.
I expect the market for hardware RAID cards that don't improve on software RAID is too small to be commercially viable.
On 6/24/22 08:09, George N. White III wrote:
On Fri, Jun 24, 2022 at 12:05 AM ToddAndMargo via users <users@lists.fedoraproject.org> wrote:
Hi All, Any of you guys know of a PCIe card that will do hardware RAID 1 with two NVMe drives? I have found some, but they are way too elaborate and, as such, way too expensive.
I expect the market for hardware RAID cards that don't improve on software RAID is too small to be commercially viable.
Ya, no fooling. This is also what I am finding.
I have been looking at the
https://www.highpoint-tech.com/product-page/ssd7202
But am loath to pay $300 for it.
I am going to try clearing the CMOS on this motherboard and starting fresh to see if the iRSTe finally wakes up.
On Thu, Jun 23, 2022 at 11:05 PM ToddAndMargo via users users@lists.fedoraproject.org wrote:
Hi All,
Any of you guys know of a PCIe card that will do hardware RAID 1 with two NVMe drives?
I have found some, but they are way too elaborate and, as such, way too expensive.
Many thanks, -T
Which NVMe-style drives: U.2 or M.2?
I have used LSI, the Dell controllers (which can run in other Linux boxes), and Areca, primarily in the U.2 format. It depends on how comfortable you are, but I have found PCIe controllers that worked fine on eBay. In fact, I have even gotten an NVMe hotswap cage before.
On 6/24/22 09:24, Mauricio Tavares wrote:
On Thu, Jun 23, 2022 at 11:05 PM ToddAndMargo via users users@lists.fedoraproject.org wrote:
Hi All,
Any of you guys know of a PCIe card that will do hardware RAID 1 with two NVMe drives?
I have found some, but they are way too elaborate and, as such, way too expensive.
Many thanks, -T
Which NVMe-style drives: U.2 or M.2?
M.2: Samsung 980s
On 6/23/22 20:04, ToddAndMargo via users wrote:
Hi All,
Any of you guys know of a PCIe card that will do hardware RAID 1 with two NVMe drives?
I have found some, but they are way too elaborate and, as such, way too expensive.
Many thanks, -T
Hi All,
The one I finally landed on was the HighPoint SSD7202. This is a compilation of questions I asked HighPoint.
Thank you all for the help and tips!
-T
HighPoint SSD7202 Hardware RAID Card Q&A:
Product page: https://www.highpoint-tech.com/gen3-nvme-m2-bootable
1) Is the SSD7202 UEFI bootable? Yes.
2) What is the warranty? Two years.
Note: they do not perform out-of-warranty repairs for a fee, but they do give customer loyalty discounts.
3) What is the life expectancy (in years) of the cooling fan? 70,000 hours continuous at 40°C.
Fan spec's:
https://u.pcloud.link/publink/show?code=XZk5lLVZ76OQcrkuio4kkNFfkTLXvQy5IpyX
4) Is the cooling fan replaceable? "Supposedly" (Details were not forthcoming)
5) Does this work on Windows 11? Yes, the SSD7202 supports Windows 11.
6) Is there a configuration utility at boot? Yes, at UEFI.
7) Is there a Windows 11 utility to manage the SSD7202? https://www.highpoint-tech.com/gen3-nvme-m2-bootable - click on the "Downloads" tab and look for the "Software Downloads" heading.
8) Is there Linux support? https://www.highpoint-tech.com/gen3-nvme-m2-bootable - click on the "Linux Updates" tab.
As of 2022-07-02: CentOS 8.3/8.2/8.1/7.7/7.8/7.9; Debian 10.4.0/10.5.0/10.6.0/10.7.0/10.8.0; Ubuntu 20.10/20.04/18.04; Red Hat 8.3/8.4
9) Where is the driver download page? https://www.highpoint-tech.com/gen3-nvme-m2-bootable - click on the "Downloads" tab and look for the "Software Downloads" heading.
10) Where is the manual download page? https://www.highpoint-tech.com/gen3-nvme-m2-bootable - click on the "Downloads" tab and look for the "Documentation" heading.
11) Is the configuration information for the RAID volume on the drive itself or on the card?
The configuration information for the RAID volume is stored on the drives themselves, and the card supports RAID roaming. Data will not be lost as long as the drives are not damaged.
On Thu, Jun 23, 2022 at 11:05 PM ToddAndMargo via users users@lists.fedoraproject.org wrote:
Hi All,
Any of you guys know of a PCIe card that will do hardware RAID 1 with two NVMe drives?
I have found some, but they are way too elaborate and, as such, way too expensive.
I'm really not certain how sophisticated or reliable either PCIe or NVMe is with respect to error reporting, or even whether it varies by make/model. My understanding is that internally it has to be good, because your data isn't really stored in any recognizable form on solid state drives; it's a "probabilistic representation of your data" and requires really sophisticated encoding/decoding to "almost certainly" return your data. But when that doesn't happen, curiously, it (anecdotally) seems rare to get discrete read errors like we see with hard drives. More commonly, the drive returns garbage or zeros instead of your data. This is where Btrfs shines in general, but it really shines in the raid1 configuration.
In the normal single-drive configuration, Btrfs will verbosely complain. It has limited ability to correct problems when the metadata profile is dup (two copies of the metadata on one drive), which is the mkfs default since btrfs-progs ~5.15. For various reasons, even dup might have two bad copies on a single SSD.
But in the raid1 configuration (two copies on different devices), Btrfs can unambiguously determine on every read whether data or metadata is wrong, and grab the good copy from the other drive, and overwrite the bad copy. And this is all automatic. You can see the same scary verbose message in dmesg, but you'll see additional messages for the fixups. Fixup also happens during scrub, useful for the areas that aren't regularly read.
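A quick sketch of how to exercise and observe that on a Btrfs raid1 mounted at /mnt:

    # read and verify every copy; bad copies are rewritten from the good one
    btrfs scrub start -B /mnt
    # per-device counters for corruption and read/write errors seen so far
    btrfs device stats /mnt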
Conversely, any hardware, mdadm, or LVM RAID depends on the hardware reporting a read error. If garbage or zeros are returned, the RAID can't do anything about it. [1]
Sounds great. So why not btrfs raid1? Well, right now the code that handles degraded mdadm RAID is all in dracut (in the initramfs). The initramfs contains dracut scripts that try to assemble the RAID; if a drive is missing, it won't assemble, so the scripts start a loop, wait for about 3 minutes, and then attempt a degraded assembly. But dracut doesn't handle Btrfs in the same situation, and no one has done the work so far to make it possible. If a drive flat-out dies, what happens at boot time is an indefinite wait for the device to appear, because of a udev rule that requires all Btrfs devices to be present before a mount is attempted. That's good in that we don't want to prematurely try a normal or degraded mount. Anyway, this area needs development work. So if your use case requires unattended boot when a drive has failed, this setup is not for you.
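For reference, the manual recovery after a dead drive looks roughly like this (a sketch; the device paths are placeholders, and devid 2 stands in for whichever member went missing):

    # one-time degraded mount using the surviving device
    mount -o degraded /dev/nvme0n1p3 /mnt
    # rebuild the mirror onto a new drive in place of the missing devid 2
    btrfs replace start 2 /dev/nvme2n1 /mnt
    btrfs replace status /mnt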
So those are the current trade-offs.
[1] There's experimental dm-integrity support via cryptsetup. It works rather differently than Btrfs, but has the ability to detect such corruption problems and report them to the upper layer as a read error where the normal RAID error correction can then work properly.
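If anyone wants to experiment with that, a hedged sketch using the integritysetup tool that ships with cryptsetup (the device path is a placeholder, and formatting wipes the device):

    # add per-sector checksums so silent corruption comes back as a read error
    integritysetup format /dev/sdX
    integritysetup open /dev/sdX integ0
    # /dev/mapper/integ0 can then be used as an md or LVM RAID member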