so i've noticed an interesting effect of latency on my 2 egpu setups. my first one is a macbook air with the sonnet expresscard thunderbolt setup to a gtx 560. the other one is the same card hooked up to the mpcie port on my hp envy 15 1050nr.
now, the cabling for the envy 15 is much shorter, and so are the latencies for pcie transactions compared to the macbook air setup. the mba setup has more throughput because i can run an optimus setup.
on paper, the mba setup should wipe the floor with the envy 15 setup. but this is not the case.
in some of the 3dmark06 tests, the mba completely trounces the envy15. but when i run skyrim on high/ultra with shadows set to high quality, the envy 15 overtakes the mba. if i disable shadows, the mba outperforms the envy15.
on diablo3, if i enable high quality shadows, something similar happens. the envy15 will have no problem with hq shadows, but on the mba, it causes stuttering. if i turn down shadows, the mba will outperform the envy15.
the envy15 doesn't have an SB-based cpu, so i can't run an optimus setup on it.
my theory here is that latency becomes a huge factor when games need to offload graphics work to the cpu, for things like shadows or particle effects.
i'm waiting on mlogic's tbolt setup to become available, or possibly looking into using the sonnet expresscard pro to enable pcie 2.0 speeds. but i'm also looking into reducing overall tbolt cable length and expresscard cable length anywhere i can so that latency is minimized up to the thunderbolt phy where pcie gets serialized.
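to make that theory concrete, here's a rough back-of-envelope model in python. the round-trip latencies, render times and per-frame sync counts are all assumptions i'm making up for illustration, not measurements of either machine; the point is just that a per-transaction latency gap barely matters when there are few cpu<->gpu round trips per frame, but swamps a throughput advantage once shadows force a lot of them.

```python
# rough model of per-frame cost: every synchronous cpu<->gpu round trip
# stalls the frame for at least one link round trip. all numbers below are
# illustrative assumptions, not measurements of either setup.

def frame_time_ms(gpu_render_ms, sync_round_trips, link_rtt_us):
    """frame time when each synchronous transfer waits out one link round trip."""
    return gpu_render_ms + sync_round_trips * link_rtt_us / 1000.0

setups = {
    # shorter mpcie chain: assume a lower round trip, but no optimus bandwidth boost
    "envy15 mpcie":            {"render_ms": 13.0, "rtt_us": 2.0},
    # optimus compression helps raw render throughput, longer tbolt chain hurts latency
    "mba tbolt + expresscard": {"render_ms": 11.0, "rtt_us": 8.0},
}

for name, s in setups.items():
    for label, syncs in (("shadows off", 10), ("shadows high", 500)):
        t = frame_time_ms(s["render_ms"], syncs, s["rtt_us"])
        print(f"{name:24s} {label:12s} ~{1000 / t:.0f} fps")
```

with these made-up inputs the mba wins with shadows off (~90 vs ~77 fps) but loses once shadows pile on the round trips (~67 vs ~71 fps), which is the same crossover i'm seeing in skyrim and diablo 3.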
-
-
the mba has a turbo frequency of 2.9ghz, 2.5 if both cores are active. the envy15 has a turbo frequency of 2.8ghz. given that one is a clarksfield and the other is sandy bridge with a faster clock, you would think the mba would be faster in all respects. but it's not, which sort of contradicts what you said.
-
The MBA uses a ULV CPU; that's a 25-40% hit on actual CPU performance in raw crunching. The MBP and Envy use full-voltage CPUs.
-
Karamazovmm
I don't know why you are surprised. You are still limited to the expresscard bandwidth, and given the slower cpu in the mba, it's only logical that it should be slower.
-
i think you are misinterpreting what i'm saying.
with shadows disabled in both skyrim and diablo3, the mba is WAY faster than the envy 15.
with shadows turned all the way up, the mba stutters, but the envy15 maintains a higher average performance, with NO stuttering.
both have i7 cpus, and in either case, neither is going to be the bottleneck here. it's going to be either bandwidth or latency. since the mba has higher effective bandwidth due to optimus, the only other thing left is latency. -
higher latency is very normal considering you are using an expresscard adapter on the macbook air. thunderbolt itself has very low latency, a fact proven by other thunderbolt solutions.
gpu > egpu pci-e adapter > expresscard > thunderbolt > macbook
it's very different from
gpu > egpu thunderbolt > notebook
it's a pci-e link directly, just like external pci-e. there's almost no latency. -
it's just interesting and an unexpected result. i'm waiting on mlogic to release their thunderbolt enclosure to see what the results are. i'm guessing latency will be a similar issue here as well, but i'm hoping the bandwidth gain will offset it.
they need to release it already. -
Not interesting at all because it's really expected. It's obvious that the latency would be higher. you are using adapters, what were you thinking? magic?
thunderbolt has no bottlenecks. get a thunderbolt to thunderbolt connection and check it out. even current solutions like HDDs and other implementations prove it.
regarding the mlogic enclosure, it's the same thing. if you are using a thunderbolt expresscard adapter, results will be crappy. -
how is this expected? you're telling me you completely expected a thunderbolt interface that implements optimus compression over gen 1 pcie (1x) to be slower than a normal gen 1 pcie interface without optimus, and only conditionally? even after all the data that has been compiled points to anything implementing optimus being faster? really?
any end device on a thunderbolt chain is going to use pcie as a breakout layer for data transport. so those hard disks out there are more than likely using some type of pcie based disk controller that sits downstream of the thunderbolt serdes.
currently, there is no such thing as thunderbolt to thunderbolt when connecting an end device, such as a gpu. thunderbolt is a passthrough. you're going to have some kind of breakout to pcie on the other end no matter what with an egpu setup. same with hard disks, or whatever 1st gen end device you connect to the thunderbolt chain.
unless someone has implemented a gpu card with a native thunderbolt phy layer, which nobody has, what you're talking about makes no sense at all.
maybe this will change with future devices, but right now there is no such thing as a native thunderbolt end device. the only company making asics based on the thunderbolt protocol is intel. ask me how i know. -
try this: micro sd to mini sd to full size sd to cf to expresscard to usb. then measure the latency and speed and compare it to a simple microsd reader.
you will see that it's exactly what you'd expect. -
exactly, the more steps in the chain the slower it gets.
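to put toy numbers on that, here's a minimal sketch: every per-hop latency below is a made-up placeholder, and the only point is that each extra adapter/bridge in the chain adds its own delay on top of the pci-e link itself.

```python
# toy illustration only: made-up per-hop latencies, summed along each chain.
chains = {
    "gpu > egpu pci-e adapter > expresscard > thunderbolt > macbook": [0.5, 1.0, 0.5, 2.0, 0.5],
    "gpu > egpu mpcie adapter > mpcie > notebook":                    [0.5, 1.0, 0.5],
}

for chain, hop_latencies_us in chains.items():
    print(f"~{sum(hop_latencies_us):.1f} us one-way (assumed)  {chain}")
```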
-
Does this mean that the MSI GUS and the other converter being developed (forgot its name) will have the same issues, or are those directly converting the GPU's output to thunderbolt?
I'm eager to see the new MBAs that'll be announced today, as I may buy one if an eGPU setup with one of the mentioned devices can work on it (I plan to erase OSX asap and install Win7 if I get one). -
Obviously Optimus is always a viable solution for nVidia cards, and Virtu drivers could work for AMD ones. -
So if I get a macbook air and put windows on it, the thunderbolt eGPU solutions are supposed to work on it as they would on a PC (same high bandwidth, etc.)? Also if I used an external monitor, that wouldn't be affected by the driver issue with the video signal, would it?
-
-
Great, thanks for the help (+rep).
-
don't pop the bottle just yet. we don't know for sure how it works.
I bet not all thunderbolt chips perform the same way, or are even compatible with an egpu solution.
If these problems are not an issue on a mac, then yes, it will work.
But let's not forget the MSI GUS II demonstration was done with a macbook. -
and the CES demo was an Acer
http://forum.notebookreview.com/e-g...i-external-thunderbolt-gpu-2.html#post8581351 -
DSL3510 and DSL3310
But on notebooks most of them will probably be the DSL3310 due to its lower power consumption. Anyway, even the lower-end controller delivers about 4 times the performance that a x1.2Opt setup can. Some testing was done with eGPUs, so I'm quite confident that the system can only improve (first of all with the plug and play support that Intel promised in future driver releases, like the Mac counterpart). -
User Retired 2
An important point. x1.2Opt is approx the same as x2 2.0, so Thunderbolt is only about twice as fast. Smart notebook manufacturers (are you listening?) could also just give us an x1.3Opt-capable expresscard slot by routing it from the Series-7 northbridge rather than the southbridge. Not usually the case, but doable as an alternative to Thunderbolt for eGPU purposes, without the licensing headaches associated with Thunderbolt chips.
x1.3Opt would be approx the same performance as the x4 2.0 Thunderbolt. BPlus have confirmed their PE4L 2.1 can run at pci-e 3.0 link speed.
REF: Intel's Series-7 chipset has a x4 3.0 capable northbridge -
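To put raw numbers behind the link shorthand in the post above, here is a quick sketch. The per-lane figures are standard pci-e rates; the "Opt" labels used in this thread refer to effective throughput after Optimus compression, so they are not directly comparable to these raw numbers.

```python
# raw pci-e bandwidth per direction for the link configurations being compared.
# pci-e 1.x/2.0 use 8b/10b encoding (250/500 MB/s per lane), 3.0 uses 128b/130b (~985 MB/s).
PER_LANE_MBPS = {"1.x": 250, "2.0": 500, "3.0": 985}

configs = [
    ("x1 2.0 (x1.2 expresscard/mpcie link)",         1, "2.0"),
    ("x2 2.0",                                       2, "2.0"),
    ("x4 2.0 (pci-e side of a DSL3510 Thunderbolt)", 4, "2.0"),
    ("x1 3.0 (x1.3, if northbridge-routed)",         1, "3.0"),
]

for label, lanes, gen in configs:
    print(f"{label:48s} ~{lanes * PER_LANE_MBPS[gen]} MB/s")

# note: first-gen Thunderbolt itself runs 10 Gbit/s channels, so the x4 2.0
# figure is a ceiling on the chip's pci-e side, not what the cable delivers.
```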
This is a bit off topic, but what do you think, could a ULV i7 IB CPU be a bottleneck in an eGPU setup? (With a higher end GPU, like a GTX 670)
Edit: Judging from the MSI GX60's results (AMD Trinity CPU with 7970m) it shouldn't be that bad I guess, even if it would bottleneck, as the A10-4600M is at around i3 SB performance levels and it's not terrible there... -
User Retired 2
If using an x1.2Opt expresscard eGPU, then SimoxTav has a good example to draw performance data from. His i5-2520M 2.5GHz (Turbo=3.2GHz) and an i7-2630QM (Turbo=2.7GHz) saw the former benchmark better due to the higher turbo boost, x1.2Opt's pci-e compression benefitting from a faster CPU.
A Thunderbolt eGPU won't do end-to-end pci-e compression, so a faster CPU will not see the same benefits. There the only concern will be how much computing power the game requires ('cpu bound'). -
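One rough way to picture that difference, with made-up numbers purely for illustration: the compressed x1 link's effective throughput is capped by whichever is slower, the link carrying compressed data or the CPU producing it, while an uncompressed Thunderbolt link leaves the CPU out of that particular loop.

```python
# illustrative model only; the ratios and MB/s figures are assumptions, not measurements.

def effective_mbps(raw_link_mbps, compression_ratio, cpu_compress_mbps):
    """effective throughput = slower of (link carrying compressed data, cpu producing it)"""
    return min(raw_link_mbps * compression_ratio, cpu_compress_mbps)

# x1 pci-e 2.0 with optimus-style compression: scales with cpu speed up to the link cap
print(effective_mbps(500, 2.0, cpu_compress_mbps=900))    # slower cpu -> 900 MB/s effective
print(effective_mbps(500, 2.0, cpu_compress_mbps=1500))   # faster cpu -> 1000 MB/s (link-bound)

# x4 pci-e 2.0 over thunderbolt, no end-to-end compression: cpu speed drops out of this term
print(effective_mbps(2000, 1.0, cpu_compress_mbps=float("inf")))  # 2000 MB/s regardless of cpu
```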
-
The notebook is an Acer S5, similar to a macbook air. I hate it.
Actually this is something to worry about, because the DSL3310 has half the bandwidth of the DSL3510.
"For Ivy Bridge we know for certain that Intel will be offering two different solutions which we have reported about multiple times in the past, namely the DSL3310 which is a 12x12mm chip which offers two lanes worth of PCI Express bandwidth and draws 2.1W as well as the DSL3510 which offers four PCI Express lanes and draws 2.8W. The DSL3510 can also be used for daisy chainable devices and as such it would be a lower cost, smaller and more power efficient alternative to the original Light Ridge or CV82524 chipset.
Another aspect that makes the DSL3510 interesting is that it supports multiple internal DisplayPort inputs. What this means is that it could in theory interface with a discrete graphics card as well as the integrated graphics from an Intel CPU. This is likely to be the chip used by Apple in its desktop systems, whereas the more power efficient DSL3310 will end up in notebook products."
http://vr-zone.com/articles/intel-f...trollers-just-in-time-for-new-macs/15539.html
And what chip does the msi gus II use? because I saw this:
-
thunderbolt vs. 1x, and latency
Discussion in 'e-GPU (External Graphics) Discussion' started by borealiss, Jun 5, 2012.