^ I know.. but it would be better than nothing. It would be a PCI-E 1X 2.0 against the 4X 2.0 of Thunderbolt.. In the first case we would have losses going from 15% to 60% (referring to scaling in a desktop setup) with respect to 16X.. with Thunderbolt or Light Peak we would have losses going from 5% to 30%. But it's still better for those who cannot achieve x1 2.0 but only 1.0 (losses reaching 80% in the worst case scenarios/applications)
-
-
Over at Slashdot they are having a discussion on TB external GPU setups and their feasibility/use. I figured I'd chime in over there with a link to our thread in hopes of some new traffic and interest.
-
User Retired 2 Notebook Nobel Laureate NBR Reviewer
requested an external video graphics devices section on NBR
I've asked Charles, the head moderator, to create a new section on NBR along the lines of "external video graphics devices". Then we could have specific discussions like TB development, individuals' implementations, a troubleshooting guide, etc., instead of trying to encompass it all in one thread.
I hope he agrees. It would make positions available for summary page editors of some sticky threads too. -
Karamazovmm Overthinking? Always!
nope, the pcie standard is bidirectional.
a pcie x1 1.0 has at least 2 + 2 Gbps bidirectional channels.
a pcie x1 2.0 has at least 4 + 4 Gbps channels
it's still too slow.
From the benchmarks I saw using x4 slots, the performance difference ranged from 0-50%, with the latter only felt on direct ports like COD; the former was the standard. There was only a handful of games that showed around a 10% decrease; the range from 11-49% was completely empty in the games tested
That was one of the reasons that I bought a mbp 13
EDIT: I forgot that when you have a x16 lane layout, the pcie lane is 2x higher; thus when you get a pcie x1 2.0 it's indeed 4 + 4 Gbps bidirectional channels. Thus all the numbers are in a config of 1 per lane -
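Those per-lane figures fall straight out of the raw signalling rate and 8b/10b encoding; here's a quick sketch of the arithmetic (my own illustration, not from the post):

def pcie_gbps_per_direction(gen, lanes):
    raw = {1: 2.5, 2: 5.0}[gen]   # raw GT/s per lane for pci-e 1.0 / 2.0
    return lanes * raw * 8 / 10   # 8b/10b encoding -> usable Gbps, one direction

print(pcie_gbps_per_direction(1, 1))  # 2.0  -> the "2 + 2 Gbps" x1 1.0 case
print(pcie_gbps_per_direction(2, 1))  # 4.0  -> the "4 + 4 Gbps" x1 2.0 case
print(pcie_gbps_per_direction(2, 4))  # 16.0 -> x4 2.0, the slice a TB controller gets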
You're right... so Thunderbolt would only be a 2X 2.0 (8+8 Gbps, close to 20)?
-
Hi, after much reading I finally got my PE4H 2.4a kit and an MSI GTX460 1G Cyclone today, and started to DIY my eGPU.
my spec is: xps m1330 | T7300 | 4gb ram | 8400gs | win 7 pro |
what I did:
1- I plugged everything in, then powered on the PSU and video card first
2- turned my notebook on. Win 7 detected a new display adapter, so I installed the driver from the driver CD (ver. 266.58)
3- rebooted, and Device Manager shows no errors (says the 8400GS and GTX460 are working correctly)
I'm stuck there and don't know what to do.
1- how do I switch so the notebook uses the GTX460 instead of the 8400GS on my internal display?
found in the pre-purchase FAQ ---> almost impossible without Optimus
2- I tried to plug in an external display and nothing happens (can't detect it). How do I use an external display?
3- I read this FAQ and this, so I uninstalled my 8400GS's driver (275.33), then tried to force it to use the driver that came on the CD (266.58), and Windows says it cannot find a suitable driver.
can someone tell me what I'm missing or what's next?
PS: which version of ExpressCard comes with my m1330? 1.0?
thanks -
User Retired 2 Notebook Nobel Laureate NBR Reviewer
There are XPS1330 examples on the first page so we know others have got it to work.
The FAQ item you note asks you to load the NVidia Verde driver + modded nvam.inf from here, which will provide a unified driver for both the 8400GS and GTX460. The internal LCD won't work on your system since it's not Optimus capable, so ensure an external LCD is attached to your GTX460.
The XPS1330 has an expresscard 1.0 slot. You'd need the latest Sandy Bridge systems to get an expresscard 2.0 slot. -
Karamazovmm Overthinking? Always!
since the cable (or the adapted connector) only supports 2 channels, it doesn't matter if you buy the mba (Eagle Ridge, 2x 10 Gbps bidirectional channels) or the mbp (Light Ridge, 4x 10 Gbps bidirectional channels)
Only if we had a cable (or an adapted connector) that supported all 4 channels would it really matter.
since the mDP can have a bandwidth of:
1.62, 2.7, or 5.4 Gbit/s data rate per lane; 1, 2, or 4 lanes; (effective total 5.184, 8.64, or 17.28 Gbit/s for a 4-lane link); 1 Mbit/s or 720 Mbit/s for the auxiliary channel.
So you see it's either the adapted connector or the cable that wasn't designed to provide the entire bandwidth the controller can deliver.
You do realize I'm hoping so much that I'm wrong on all these counts -
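For reference, those "effective total" DisplayPort figures are just the raw per-lane rate times 4 lanes, minus 8b/10b overhead; a quick check (my own arithmetic, not from the quoted spec):

# DisplayPort main-link raw rates per lane, in Gbit/s
for raw_per_lane in (1.62, 2.7, 5.4):
    effective = raw_per_lane * 4 * 8 / 10   # 4 lanes, 8b/10b encoding
    print(raw_per_lane, "->", round(effective, 3))   # 5.184, 8.64, 17.28 Gbit/s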
That's not what I mean.
At the moment the TB cable supports 10 Gb/s bidirectional (or less, since the mDP TB cable seems slower), and it's claimed to be 4X PCI-E.. but a 4X 2.0 is 16 Gb/s bidirectional while a 4X 1.0 is 8 Gb/s bidirectional. That's my doubt... It's not a real PCI-E 4X 2.0 but a 1.0, or a 2X 2.0, with the current cable technology.. -
Wait... what is this??? O_O http://download.intel.com/pressroom...itePaper.pdf?iid=pr_smrelease_vPro_materials2 What would it mean for Sony's implementation?
-
Karamazovmm Overthinking? Always!
that's what I tried to explain, it's not the controller; indeed the Light Ridge controller can do more than pcie x4, since the controller can do 4 x 10 = 40 Gbps bidirectional channels, and the pcie x4 is 4 x 8 = 32 Gbps. However, since the southbridge only dedicates a pcie x4 to the thunderbolt controller, it's indeed pcie x4.
However the limitation is not in the controller, it's in the cable or the adapted mDP connector, since it can only carry 2 thunderbolt channels per port.
as I said, I really hope I'm wrong on this one, since intel hasn't released white papers regarding thunderbolt yet (or not one that I could find); if you do have a link please share
that's the whitepaper for the future of thunderbolt, or rather a working prototype they had; it caused a stir when it was shown last year. However, thunderbolt needs to carry power for it to be that useful, thus there isn't any more comment regarding this. After that they had a showcase demonstrating Light Peak, and guess what the connector interface was? USB -
Hi, I've done some reading through the forums (including the entire 1st post of this thread) and did a search, but I couldn't find the answers I seek, so I ask -
Referring to Nando4's first post, how do the Lucidlogix Virtu drivers work? Are there any hardware limitations to this? Has anyone here tried using them? How do they compare to Nvidia Optimus as an internal LCD solution?
Reason I'm asking is because I'm planning to buy a new laptop, and I want to know if it is worth the trouble to limit my choices to those with Intel 4500MHD/HD/HD3000 iGPUs for the Optimus solution.
Thanks in advance !
-
Sorry for the misunderstanding
So, since TB has a cable claimed to go up to 17 Gb/s, we'd have a PCI-E 2X 2.0 compliant setup. And for mDP, which has only 2 lanes, we'd have a PCI-E 1X 2.0 compliant setup.
this is what we have in the Z21 http://weekly.ascii.jp/elem/000/000/048/48360/110706gian_vaio003_1000x.jpg - it seems exactly that technology... So we'd have 50 Gb/s per channel (and there you can see two channels, so 100 Gb/s.... or rather, UP TO 100). What I don't understand is.. in a solution like this, is there still a need for a Thunderbolt controller??? I'm not sure about this.. because, as that technology was presented, any kind of electrical signal could be converted to an infrared signal; so I think it's possible that the transmitter die is directly connected to the PCI-E bus. -
OK
Can anyone else tell me whether the seller at the link below:
PE4H + EC2C ExpressCard to PCI-E Adapter V2 stock 10pcs | eBay
is reliable? -
User Retired 2 Notebook Nobel Laureate NBR Reviewer
I do not have a SB notebook so I haven't used Virtu, but have gathered this from other comments on here:
1) unlike the Optimus driver, no pci-e compression is engaged, so expect a significant decrease in FPS when driving the internal LCD compared to an external LCD
2) it requires a SB notebook to run
3) it's not free
Why must you limit yourself to HD3000 iGPUs only? Any existing Optimus dGPU system can be made to work with an eGPU instead, as explained. It just means booting up via DIY eGPU Setup 1.x.
Redtrontech is bplus' Taiwanese distributor. See www.HWtools.net. -
That's reassuring, explains the fast shipping too! Had mine within a few days.
-
Karamazovmm Overthinking? Always!
no, it's pcie 2x 2.0 all around. as I said, the Light Ridge and the Eagle Ridge would give the same performance for an egpu, since it would only use 2 channels.
I find it hard to believe that what sony did will drive up to 50 Gbps bidirectional per channel.
It would only be possible if the entire pcie was dedicated to the PMD, which, given the lower performance currently seen, I find hard to believe; the more probable thing is that it's the same as apple, pcie x2 2.0 -
Sorry, my mistake again; I confused lanes with channels.
I don't mean that it will run at that bandwidth, I'm only saying the technology is capable of 50 Gb/s per optical fiber.
In this case, as Nando4 says, it may be that one fiber is for DP and the other is connected to PCI-E. Since there's no dGPU in the laptop and 50 Gb/s is available, they probably set the PCI-E bus at least at 4X 2.0 (32 Gb/s), leaving another 8X+4X unused. They could also have set the PCI-E bus as 8X+8X, which would mean a 64 Gb/s channel, more than enough to use the 50 Gb/s of that Silicon Photonics link. Or, since there's no other dGPU, they could have set the PCI-E bus directly to 16X. It wouldn't change anything anyway.
It's probably at least a 4X, since the performance difference between the 6650M in the PMD and the 6630M in the Sony SA (which is @ 16X 2.0) works out like this:
The 6650M is 10% faster than the 6630M; in a test from a Japanese review, the PMD scores 24.86, and if it's a real 4X bandwidth we'd have a loss of 20% (average across scaling reviews), so at 16X it would score 31.075. The SA/6630M scores 28.32, and since the 6650M is 10% faster, the 6650M should get 31.152. The results are quite similar, so it may be a real 4X. -
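To make that arithmetic explicit, a rough back-of-the-envelope check using the same review scores (the 10% and 20% factors are the poster's assumptions, not measurements):

pmd_6650m = 24.86        # 6650M score in the PMD (Japanese review)
sa_6630m = 28.32         # 6630M score in the Sony SA at 16X 2.0
scaling_loss = 0.20      # assumed average x4-vs-x16 loss

pmd_projected_x16 = pmd_6650m / (1 - scaling_loss)   # ~31.08 if the PMD really runs at x4
expected_6650m = sa_6630m * 1.10                     # ~31.15, since the 6650M is ~10% faster

print(round(pmd_projected_x16, 3), round(expected_6650m, 3))  # close agreement -> x4 is plausible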
Karamazovmm Overthinking? Always!
^^yep all that would make sense.
Since 2 channels are dedicated to the Wi-Fi and the LTE modem
That would leave 14 channels available, and since the 6650M performs lower than it would if it weren't external, it's clear to me that some channels are limiting things -
Uhm, if I'm not wrong, the WiFi and LTE modem (which I don't have anyway) are connected to 2 of the eight PCI-E ports available from the northbridge. In fact the SA, which has the same platform, dedicates all the PCI-E lanes from the CPU to the 6630M (it runs @ 16X 2.0, confirmed by GPU-Z).
I think they set it the same in the Z. So the SA grants a 128 Gb/s bus (16X) to the 6630M, while the Z21 grants from 32 Gb/s up to a maximum of 50 Gb/s (with PCI-E 8x or 16x). Does that make sense? -
Folks, I need some assistance please.
I want to replace my XPS M1530 with an HP ProBook 4530s.
Currently I use a PowerColor PCS+ HD5770 1GB GDDR5 running at x1E speeds.
Since I can't use the x1E trick with a SB system, how much of a change in performance will I see?
I believe the fact that the SB's ExpressCard port supports 2.0 speeds won't come into play and I'll get x1 performance - is that right?
Please shed some light on this for me.
Thanks!
-
Karamazovmm Overthinking? Always!
it makes sense, and if it is indeed right, I'm completely wrong
-
User Retired 2 Notebook Nobel Laureate NBR Reviewer
Good choice with the 4530s. I am keen to see someone post an experience with that low cost but attractive system. One thing I read in a buyers' forum is that HP apparently removed USB 3.0 from systems after a certain build date (March?), so you could try to negotiate to get the USB 3.0 model.
A HD5770@x1 2.0 would be faster than x1E but we don't have compliant hardware to run that speed.. we are still at x1 1.0 pending further testing by bplus.
I'd suggest ebaying your HD5770 and getting a GTS450 or GTX460 so you can run a x1 1.0 Optimus setup. Optimus does pci-e compression and can transparently drive the internal LCD. The changeover costs would be minor if any at all, and well worth it for the benefits it brings. -
well, I'm quite sure that the wifi and LTE are on the northbridge, that's the normal usage
For the rest.. we'll see!!! And I hope so XD
We'll partly know if we discover that it's muxless. If it's muxless it would be foolish to set 4X or less when 8X and 16X are faster (and useful in this case) and there's no other peripheral using the bus. If only there were some additional reviews... I saw laptopmag's results:
In Crysis @ 1024x768 the PMD is 6% slower (I'm not looking at native res in this case because the Z21 in the test was 1080p and not 900p)
In WoW autodetect it's 24% slower
In WoW native res it's 18% slower (still.. the resolution is higher on the Z21, but it's strange to see a smaller difference than in autodetect.. I think that in autodetect the settings chosen for the Z21 were heavier for some reason)
In 3DMark06 it's even 8% FASTER.
It must be at least 4X, maybe even more... BUT we know that Sony............ XD -
Thanks for the quick reply Nando4.
Yes, I'm aware that USB 3 has been removed
But I can't justify buying a Vostro 3540 with 2GB of ram for $524 (after shipping & tax) just to get a USB 3 port.
The Lenovo E520 is a few bucks cheaper, but it looks like a giant block of plastic (and doesn't have USB 3 as well) so the ProBook looks like the way to go right now.
Speaking of speeds, has anyone seen this?:
All Gigabyte 6-series mobos to support Ivy Bridge, PCIe 3.0 -
User Retired 2 Notebook Nobel Laureate NBR Reviewer
I hear you on the E520 looks. My only qualm about the 4530s is the low 48Whr 6-cell battery and no USB 3.0. A Dell Vostro 3450/3550 offers the same cons, but you can get a 14" within the same price range, and they have a meaty 90Whr 9-cell (+88% more battery life) accessory (for a cool $180) and a backlit keyboard option. If interested in the Vostro then please check to make sure TOLUD is set to 3.0GB or 3.25GB as described in http://forum.notebookreview.com/del...ostro-latitude-egpu-friendly.html#post7633189 by asking folks in the Dell forum area for screenshots before you jump in, or else you'd need to remove RAM to make it work. The E520 is in the same boat.. 48Whr 6-cell, no USB 3.0. Hmmm - pretty much all have the same cons.
Here's a current list of cheapest systems I could find. slickdeals.net and logicbuy.com being the two sites I visit regularly to find deals.
Cheapest US #1 $450-shipped 15.6" HP 4530s i3-2310M 2.1 4GB 320GB W7HP *expresscard slot* per HP's specs
Cheapest US #2 $469-shipped 15.6" Lenovo E520 i3-2310M 2.1 4GB 320GB W7P *expresscard slot*
Cheapest US #3 $469/$519 15-14"/13" Dell Vostro 3x50 i3-2310M 2.1 2GB 250GB W7HP *expresscard slot*
Cheapest US #4 $540-shipped 14" HP 4430s i3-2310M 2.1 4GB 320GB W7HP *expresscard slot* per HP's specs -
Oh boy! I didn't even notice the TOLUD problem with the Vostro.
In that case, the ProBook is the only way to go.
I plan on selling it around Dec/Jan anyway, so there's no point in spending money on an i5 at the moment.
I don't mind having a small battery; my M1530 has a 9-cell and it weighs a ton.
If the 6-cell will last for 2 hours then I'm good.
Hopefully tomorrow I'll order it from Amazon (due to their return policy which is far superior to other stores, even Newegg).
Thanks again
-
Karamazovmm Overthinking? Always!
you also have to remember that this type of card is not bandwidth limited.
I will take a look at the white papers that I have about Sandy Bridge here, but from what I see it's the southbridge that was left to control the pcie lanes.
and from what I see in the reviews about thunderbolt I'm quite sure it's the southbridge -
User Retired 2 Notebook Nobel Laureate NBR Reviewer
Which pci-e bus are the TB/LP and expresscard/mPCIe connected to?
An expresscard/mPCIe link and a TB/LP attach on different pci-e buses as shown in the above image (it's Series-5, but it's pretty much the same for the Series-6 Sony Z2/MBA/MBP).
A TB/LP hangs off the CPU where it can have up to a x16 link. An expresscard/mPCIe slot hangs off the PCH/Southbridge, which is configured as x1 but in some specific cases can be ganged to x2 and x4. -
Karamazovmm Overthinking? Always!
^^always the helpful man + rep
EDIT: I will rep you when I can give it again -
I don't agree.
In the DIY thread you can see a 4350 with a performance loss of around 20% at 1X 2.0 and around 5% at 8X 2.0 (a far, far slower card than the 6650M).
You can also see a HD4870-1GB with similar losses in the same game @ 1X 2.0, or losses around 50% for a HD3850-256MB (in this case the problem is the onboard memory, which is too small).
So I think that for the 6650M in the PMD there's a not-insignificant loss of performance, due to the 4X bus versus a 16X bus. -
Karamazovmm Overthinking? Always!
what game is that? Ports are really bad for egpus, especially the COD series
-
^ Crysis 1 in that case. Also 3DMark. There's always a loss of performance.
EDIT: also 8600 GT and 7300 GT show losses, sometimes very big (60% in Doom 3 for 7300 GT) -
Karamazovmm Overthinking? Always!
3DMark is quite useless; you don't know how passionately I dislike the benchmark suites we have around.
crysis 1 was always bandwidth limited -
^ sure.
But when you see a 7300GT going 15-30% slower in Half-Life 2, 55-75% slower in Doom 3, 30-45% slower in COD2, and an 8600GT going 20% slower in Street Fighter and The Last Remnant... it means the bus is always a problem.
(Crysis is a bandwidth eater but.. that's about memory bandwidth, not the pci-e bus, as it seems) -
Okay, it looks like I'm really close after downgrading to the 1.11 bios. That eliminated the error code 12 issue, so I may not need Setup 1.x. Two issues I'm facing now:
1) When I try the sleep>power eGPU>wake sequence, I end up getting a BSOD and the computer needs to restart. This happens whether I try to power-on or power-off the eGPU after Windows boots.
2) If I just let the system boot up completely into Windows with the eGPU on, I can actually see a little success in that the login prompt appears on my external monitor (connected to the GTX460). However, as soon as I log in the internal screen becomes the active display.
Using the modified driver for Optimus (though apparently not intended to fix this particular problem) had no effect - tried it just in case. -
User Retired 2 Notebook Nobel Laureate NBR Reviewer
We've had similar BSOD/external LCD disconnection issues before that were traced to an underpowered PSU or incorrect power cabling. I'd advise you to double-check your connectors. If not sure what to look for, then post a photo of the specification sticker of your PSU showing the 12V rails and a closeup showing the leads running from the PSU to the PE4H and GTX460. -
Okay that sounds like an easier fix (even if I need to get a different PSU).
I've got a Corsair CX430 which is mentioned in the suggested nVidia setups. You advised me in the other DIY Vidock thread that I should use the two 6-pin pci-e cables, but there is only one on my model. The connectors are:
x1 ATX 20-pin & 24-pin compatible
x1 EPS/ATX12V 4-pin & 8-pin compatible
x1 PCI-E 6-pin & 8-pin compatible
x4 SATA
x3 4-pin peripheral
x1 floppy
I've got all three 4-pin peripheral connectors and the one floppy connector on one cable, so I'm probably not getting enough current from just that one cable to power one pci-e plug on the card and the 4-pin slot on the PE4H. -
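A rough power-budget sketch for that concern (the wattage figures below are ballpark assumptions, not measurements of this particular setup):

slot_power_w = 75      # a pci-e slot (here, the PE4H) can supply up to ~75 W
six_pin_w = 75         # each 6-pin pci-e plug is rated for ~75 W
gtx460_peak_w = 160    # ballpark peak draw of a GTX460 (assumption)

available_w = slot_power_w + six_pin_w   # only one native 6-pin powered
print(available_w, "W available vs", gtx460_peak_w, "W peak ->",
      "ok" if available_w >= gtx460_peak_w else "second 6-pin needs power (e.g. via a molex adapter)")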
User Retired 2 Notebook Nobel Laureate NBR Reviewer
Okay then. The reason it's not working is because the second 6-pin pci-e connector isn't powered. To do that you'll need a $0.25 molex to pci-e adapter like the one shown below, so you can attach it to two of your PSU's molex plugs.. Check to see if this adapter is in your GTX460's box. There probably is one.
-
Dear Nando4,
I'm quite far from being an expert, and I know someone already asked about the possibility of using USB 3.0 instead of an expresscard and that wasn't viable since USB doesn't have a pci-e link (?), is that right? What about using an expresscard-to-USB 3.0 adapter? In that case we would use the expresscard slot and link it to the PE4L/PE4H (I guess with a modified version) via USB 3.0. Also, there are usually two USB 3.0 ports on these adapters, so couldn't we increase bandwidth by using 2 cables?
Sorry if that is too stupid...
David -
User Retired 2 Notebook Nobel Laureate NBR Reviewer
It will be November when the pci-e to USB 3.0 chips are available to experiment with. Can't speculate further until the specifications and drivers of those chips are evaluated for function with desktop video cards.
If that's too long then consider going with an expresscard/mPCIe Series-6 chipset to do x1.1Opt now and x1.2Opt once we have pci-e 2.0 certified gear from bplus. -
Hi, I successfully installed the modded driver 258.66. In the nVidia control panel I can see both the 8400GS with the internal display and the GTX460 with my external display, but whenever I try to switch/disable a display (using Fn+F8, Win key + P, or the nvidia CP) I get a BSOD. Any idea?
thanks -
What exactly does that mean? Would it not be possible to use existing USB 3.0 ports on notebooks at all?
-
^ it means it's not 100% certain.
Theoretically, since USB 3.0 is connected to one of the 8 PCI-E ports on the southbridge, it's possible. But the problem is doing it in practice, and without the hardware being released in November, we have to wait.
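For a rough sense of the ceiling involved: both links use 8b/10b encoding on their raw rates, so a USB 3.0 port is at best in the same class as a pci-e x1 2.0 lane (a sketch of the raw arithmetic only; protocol overhead would eat into this further):

def usable_gbps(raw_gbps):
    return raw_gbps * 8 / 10    # 8b/10b encoding overhead

print(usable_gbps(5.0))   # USB 3.0 SuperSpeed: ~4 Gbps usable, one direction
print(usable_gbps(2.5))   # pci-e 1.0 x1:       ~2 Gbps usable
print(usable_gbps(5.0))   # pci-e 2.0 x1:       ~4 Gbps usable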
Hm....it's only recently I became aware of external GPUs.
I'm thinking about returning my top-specced gaming laptop and getting a Macbook Air or Pro with Thunderbolt, and an external monitor.
There wouldn't be any latency issues with such a solution, right? -
^ Lower latencies than the current eGPU solutions with an expresscard, but always more than a direct connection between the GPU and the PCI-E bus. In addition, take into account that with Thunderbolt, given the current TB cable, you would achieve a 2X 2.0 compliant solution, not more.
-
Yes I did, but the problems remain
-
What kind of latency would you estimate Thunderbolt adds (in ms)?
-
Hi,
I finally ordered:
-PE4H incl. EC2C + SWEX
-PM3N
-2x HDMI Mini Cable 1.5M from DX
-passively cooled Radeon HD 6670 (max. 66W), which hopefully doesn't need an additional PCIe plug.
Can I use an old 19V 90W notebook PSU to run this, or is it limited to 36W?
If that's the case, what kind of additional PSU do I need to run this without noise? Or should I get a Radeon HD 6450 instead (I need it to run 2560x1440)?
I don't really get the meaning of the following table on the hwtools website:
PE4H ver2.4 12V/5V/3.3V power current limit
Output rail   DC15~20V input   Floppy 12V/5V input   ExCard 3.3V input
PCIe 12V      12V/3A (Max)     12V/6A (Max)          12V/0A
USB 5V        5V/1.5A (Max)    5V/1.5A (Max)         5V/1A (Max)
PCIe 3.3V     3.3V/3A (Max)    3.3V/3A (Max)         3.3V/1.3A (Max)
thanks
EDIT:
Should I get an adjustable 150W notebook PSU which can deliver 8.5A at 12V? -
User Retired 2 Notebook Nobel Laureate NBR Reviewer
techpowerup's testing of a HD6670 found it needed 58W peak here. Officially, bplus rate the 19V->12V regulation at 3A, or 36W. I do know that users in this thread have previously run higher power through it. That said, it would be at your risk, because I also know some PE4Hs ended up with blown 19V->12V regulator circuits. Not really a problem if it did, as you could always revert to using the 12V floppy jack with an ATX PSU or an (adapted) 12V adapter instead.
A HD6450 is the safer and cheaper option if you do not need the higher performance a HD6670 offers.
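Putting the table's current limits and those wattage figures together (a quick sketch; 58W is the techpowerup peak noted above, the 36W and 72W come from the 3A and 6A ratings in the table):

pe4h_19v_path_w = 12 * 3     # 19V->12V regulator: 12V / 3A = 36 W
pe4h_floppy_path_w = 12 * 6  # 12V floppy input:   12V / 6A = 72 W
hd6670_peak_w = 58           # measured peak draw of a HD6670

print(hd6670_peak_w <= pe4h_19v_path_w)     # False -> exceeds the 19V adapter path rating
print(hd6670_peak_w <= pe4h_floppy_path_w)  # True  -> fine via the 12V floppy jack / ATX PSU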
Since there are no Thunderbolt peripherals, including eGPUs, available, there is no way of measuring the latency the TB controller introduces. I would assume it's negligible since the TB controller pair is really just doing mux/demux of pci-e and displayport traffic.