The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information that had been posted on the forums would be preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.

Precision 7550 & 7750 Owners' Thread

Discussion in 'Dell Latitude, Vostro, and Precision' started by SlurpJug, May 30, 2020.

  1. Aaron44126

    Aaron44126 Notebook Prophet

    Reputations:
    874
    Messages:
    5,548
    Likes Received:
    2,058
    Trophy Points:
    331
    I don’t know about the touchpad button, but regarding the fingerprint reader, my understanding is that the built-in ControlVault fingerprint reader has never worked in Linux. It’s not a new thing with this generation; the driver has just never been developed. For a while, Dell wouldn’t even let you order a system with both Linux and the fingerprint reader because of the incompatibility.
     
  2. oscarf

    oscarf Newbie

    Reputations:
    0
    Messages:
    7
    Likes Received:
    1
    Trophy Points:
    6
    I ordered the 7550 (i7-10875, 4K UHD). I'll replace the factory memory first with 64 GB (2 x 32 GB) and later 128 GB (4 x 32 GB). What memory is known to be compatible? My current alternatives:
    Crucial 64 GB kit (2 x 32 GB) DDR4-3200 SODIMM (PC4-25600, CL22)
    Kingston HyperX Impact 64 GB SODIMM kit (2 x 32 GB), DDR4-2933, CL17-19-19
    Are these compatible? Are there additional alternatives?
     
  3. skarp

    skarp Newbie

    Reputations:
    0
    Messages:
    5
    Likes Received:
    1
    Trophy Points:
    6
    Hello,

    I would like to know if anyone already has experience configuring RAID 0 on the Precision 7550 or earlier models.
    What I want to know is: have you seen performance gains with RAID 0?

    In theory it can be twice as fast as a single drive for reading/writing, and in practice there can be a speed increase of around 50%...

    The reason for this question is that I saw a video from Josh (), where we can see that RAID 0 doesn't make sense because the CPU is not receiving all the bandwidth.

    I ordered my 7550 with only a 250 GB drive, so I am now waiting to receive my order of 2 Samsung 970 EVO Plus drives to replace/upgrade it. Sure, I can test it myself, and I guess that's what I will do, but if anyone already has experience, it would be great to read about it.

    My laptop will be backed up every 2 days over a 10 Gb local network to a NAS (all projects are also pushed to cloud Git repositories multiple times per day), so data loss is not a big deal; that's why I am looking at RAID 0 and not RAID 1.

    FYI, on my Precision 5820 home PC, I have a RAID 1 with 2 Samsung 970 Pro NVMe drives on Dell's Ultra-Speed PCIe quad card, and with the VROC key installed I get around 6,300 MB/s sequential read speed…
     
    Last edited: Jul 20, 2020
  4. Scott Jann

    Scott Jann Newbie

    Reputations:
    0
    Messages:
    8
    Likes Received:
    5
    Trophy Points:
    6
    I got mine with the same CPU, and thinking along the same lines as you, I bought some memory to add to it. Looking at the specs for the CPU, it only supports 2933 MHz memory, so I tried to find that rather than 3200 MHz memory, and there weren't a lot of options. I bought 2 x 32 GB Nemix DDR4-2933, and despite the specifications in the manual saying that any 32 GB 2933 MHz modules should be supported, I was met with 2 flashes of amber and 5 flashes of white with the RAM installed (the "invalid memory" blink code). So, avoid Nemix memory!

    I just ordered a pair of Crucial sticks to replace it (CT2K32G4SFD832A); hopefully they will work. Searching for the 7550 on their website brings them up, so I'm hopeful.

     
  5. Scott Jann

    Scott Jann Newbie

    Reputations:
    0
    Messages:
    8
    Likes Received:
    5
    Trophy Points:
    6
    This is true; I just got a 7550 with the fingerprint reader and Ubuntu, so you certainly can get that combination now. lsusb shows that it is a Broadcom 0a5c:5843, which has no support in fprint under Linux and has been used in many laptop models, so we're not alone.

    I already complained to Dell Support, and my agent confirmed that it would need driver support from Broadcom and that Dell can't do anything about it (besides using a different sensor, of course), but then he said there should be a driver for it sometime this year. I'm not sure where that came from, or whether it is true; hopefully he knows something we don't.

     
  6. Aaron44126

    Aaron44126 Notebook Prophet

    Reputations:
    874
    Messages:
    5,548
    Likes Received:
    2,058
    Trophy Points:
    331
    With prior systems, the result has been no real performance gain from RAID 0 on NVMe. All of the NVMe drive slots share PCIe bandwidth, with the lanes going through the PCH. (RAID 0 could still be handy just for the simplicity of having multiple NVMe drives show up as one large volume... I'm planning to set a system up this way the next time I upgrade.)
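
    A rough back-of-the-envelope sketch of why the shared PCH uplink caps RAID 0 sequential speed, assuming a ~3.94 GB/s uplink and ~3.5 GB/s per drive (illustrative figures, not measurements):

        # Rough estimate of sequential throughput for NVMe drives behind a shared
        # PCH uplink (illustrative numbers, not measurements).
        PCH_UPLINK_GBPS = 3.94      # assumed PCIe 3.0 x4-equivalent uplink, in GB/s
        DRIVE_SEQ_READ_GBPS = 3.5   # assumed single-drive sequential read, in GB/s

        def raid0_ceiling(num_drives):
            # RAID 0 scales with drive count, but never past the shared uplink.
            return min(num_drives * DRIVE_SEQ_READ_GBPS, PCH_UPLINK_GBPS)

        for n in (1, 2, 4):
            print(f"{n} drive(s): ~{raid0_ceiling(n):.2f} GB/s ceiling")
        # 1 drive: ~3.50 GB/s; 2 or 4 drives: ~3.94 GB/s, barely above one drive.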
     
    skarp likes this.
  7. SRom

    SRom Notebook Enthusiast

    Reputations:
    0
    Messages:
    34
    Likes Received:
    6
    Trophy Points:
    16
    Just a small question I have been wondering about for a while. The general opinion seems to be that all the drives share the same bandwidth, so there is no benefit to maximum speed, and logically that sounds correct. But what about the slower small random reads? Aren't those limited by the drive itself rather than by the bandwidth, given that the speed there is only tens or hundreds of megabytes per second to begin with? If two drives are working together to pick up this small data, shouldn't it be faster? Or, even if it is faster, is there no benefit from it?
     
  8. Aaron44126

    Aaron44126 Notebook Prophet

    Reputations:
    874
    Messages:
    5,548
    Likes Received:
    2,058
    Trophy Points:
    331
    Maybe... doing some tests would be interesting.

    I don't think that it would be much better. This is how I look at it. With a random read request, you are spending more time waiting for the drive to get the data ready than you are waiting for the data to be transferred across the interface. I don't want to call it a "seek delay" like you would have with a hard drive because there is no physical head to move, but it is sort of analogous. Let's say you were requesting a bit of data smaller than the stripe size. In that case, only one drive would be served with a read request, and the result wouldn't be any better than if you were not using RAID at all. Even if multiple drives were hit with a slightly larger request, the wait time would dominate the transfer time so the extra bandwidth wouldn't be tremendously helpful.

    Maybe it would be helpful if random read requests were issued in parallel so that different drives could be working on different reads at the same time. I don't think that many programs work like that though (except maybe in benchmark scenarios) ... "random" disk I/O is generally handled serially, one request after another, waiting for the result to come back in between. Reading or writing a large bulk of "contiguous" data can be handled more in a parallel fashion (automatically by the OS I/O buffer manager) but then we are not talking about "random reads".
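
    To put rough numbers on the "wait time dominates transfer time" point, here is a minimal sketch assuming ~80 µs of random-read latency and ~3.5 GB/s of usable bandwidth (both illustrative figures, not measurements):

        # Time spent waiting on the drive vs. time spent moving the data
        # for one small random read (illustrative numbers, not measurements).
        READ_SIZE_BYTES = 4 * 1024      # a typical 4 KiB random read
        BANDWIDTH_BPS = 3.5e9           # assumed usable bandwidth, bytes per second
        DRIVE_LATENCY_S = 80e-6         # assumed NVMe random-read latency

        transfer_time_s = READ_SIZE_BYTES / BANDWIDTH_BPS
        total_time_s = DRIVE_LATENCY_S + transfer_time_s

        print(f"transfer: {transfer_time_s * 1e6:.2f} us, wait: {DRIVE_LATENCY_S * 1e6:.0f} us")
        print(f"wait share of total: {DRIVE_LATENCY_S / total_time_s:.1%}")
        # Transfer is ~1.2 us vs. ~80 us of waiting, so doubling bandwidth with
        # RAID 0 barely changes the total time for serial random reads.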
     
    zhongze12345 and alaskajoel like this.
  9. skarp

    skarp Newbie

    Reputations:
    0
    Messages:
    5
    Likes Received:
    1
    Trophy Points:
    6
    Thank you for this informative answer.

    Having only one large volume is a good point, and this time I will configure my system to have one volume as well. It is better to access all directories from one root point; it avoids extra lines of code to switch drives when writing scripts on Windows that need to access 2 different drives.

    If RAID 0 doesn't increase performance, I think I'll avoid it. To get one large volume, I will simply mount my second drive in an empty NTFS folder via the Disk Management utility in Windows. Non-hardware RAID always adds a layer of complexity to accessing the data.
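
    For what it's worth, that folder-mount step can also be scripted; a minimal sketch assuming Python on Windows, run as Administrator, with placeholder paths (the GUID volume name would come from mountvol):

        import ctypes

        # SetVolumeMountPointW is the same operation Disk Management performs when
        # you mount a volume into an empty NTFS folder.
        kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

        mount_point = "C:\\Data\\"   # existing empty folder on an NTFS volume; trailing backslash required
        volume_name = "\\\\?\\Volume{00000000-0000-0000-0000-000000000000}\\"  # placeholder GUID path

        if not kernel32.SetVolumeMountPointW(mount_point, volume_name):
            raise ctypes.WinError(ctypes.get_last_error())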

    Not using the usual Intel RST RAID 0 will also allow mirroring just one partition of a drive, to have redundancy for some sensitive data. With the Windows disk management tools, we can create a partition on one drive and mirror just that partition onto the second drive. In the attached picture (pic 1), the red partitions are mirrored partitions (this is my old setup from my old system).

    Now, regarding the original question: I cannot find the maximum allowed bandwidth for the NVMe drives in any Dell documentation. But for one model, the Precision 7740, I found something interesting.

    In this technical guide (pic 2, https://topics-cdn.dell.com/pdf/precision-17-7740-laptop_owners-manual_en-us.pdf page 14), we can find the maximum storage speed, and it's up to 32 Gbps for 4 NVMe slots. So it seems that on this system all the lanes are separate and do not share bandwidth. If we divide 32 by four, we get 8 Gbps per NVMe slot, which is a bit strange in my opinion.

    If we search the web, we can see that the maximum speed for PCIe 3.0 x4 is 3.94 GB/s (pic 3), so for the 7740, having 32 Gbps with 4 SSDs is a bit confusing to me.

    What do you think of that?

    It would be great to find out that the lanes are also not shared on the 7550; only in that case would RAID 0 make sense, IMHO...

    [Attached images: pic 1 (mirrored partition layout), pic 2 (7740 technical guide excerpt), pic 3 (PCIe speed table)]
     
    Last edited: Jul 21, 2020
  10. Aaron44126

    Aaron44126 Notebook Prophet

    Reputations:
    874
    Messages:
    5,548
    Likes Received:
    2,058
    Trophy Points:
    331
    I always get RAID 0 and RAID 1 confused... (I wonder what brilliant person came up with this nomenclature.)
    Anyway, yes, you can do some interesting setups with the Windows "dynamic disks" feature, but I tend to avoid it, as it will not work with third-party mirroring/backup tools like Acronis True Image.

    Regarding bandwidth... 32 Gbps is right; that is just the bandwidth of four lanes of PCIe 3.0. The speed is shared between all of the NVMe slots, but it is not statically divided among them. If only one disk is active, it can use the whole 32 Gbps; if two or more disks are active, then the bandwidth is split between them. The NVMe drives have four lanes each, but they are connected to the PCH, not directly to the CPU, and there are only four PCIe lanes between the PCH and the CPU. This is the case for the 7740, 7750, and all past 7000-series Precision systems.
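
    For anyone tripped up by the units earlier in the thread: the manual's 32 Gbps and the 3.94 GB/s figure quoted for PCIe 3.0 x4 are essentially the same number, expressed in gigabits versus gigabytes. A quick sketch of the conversion, using the standard PCIe 3.0 rate of 8 GT/s per lane with 128b/130b encoding:

        # PCIe 3.0: 8 GT/s per lane, 128b/130b encoding, four lanes.
        GT_PER_LANE = 8.0
        ENCODING = 128 / 130
        LANES = 4

        gbit_per_s = GT_PER_LANE * ENCODING * LANES    # ~31.5 Gbit/s ("32 Gbps")
        gbyte_per_s = gbit_per_s / 8                   # ~3.94 GB/s

        print(f"PCIe 3.0 x4: ~{gbit_per_s:.1f} Gbit/s = ~{gbyte_per_s:.2f} GB/s")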
     
    Last edited: Jul 21, 2020
    alaskajoel likes this.