It does not work.
It's sht. Bad sht.
Consider the following: I'm using advanced software that needs GPU processing. But Optimus doesn't detect that it needs the GPU because it doesn't have the right flags. So I either have to rename the executable or change the settings manually every time it runs.
Don't forget about the added latency in presenting, the limited power coming from the GPU, and the "oh, make it stop!" problem of the GPU randomly stopping working.
Ever try to play Hyperdimension Neptunia on an Optimus laptop? Fat chance there. It'll tear like fat cheese. There is literally an entire triangle that presents with a delay of almost 0.5 seconds, splitting the screen. The game almost runs on the integrated GPU for all I know; it just needs a little extra power to look good. Well, nVidia won't let you do that.
I filed over 20 complaints: 20 to Acer, 20 to nVidia, asking to let me disable it, just on my machine. You know what they answered? They told me they know better and that I don't need it. They told me it's never going to happen. This is why I avoid Acer now. I wish I liked them...
If I could boycott nVidia entirely, I totally would.
I know that for some it just works, but what happens to those who develop? They simply don't care. Or what happens to certain games, especially visual novels and anime-related games? Nope. Nvidia ain't gonna care that this type of gamer spends some of the biggest money on their games. Nope. Not. They simply insist on adding a feature nobody needs.
What if you want to use Photoshop or other software on the nVidia GPU, to actually get decent render times, or to not have a delay every time you press the brush... Well, Optimus ain't gonna let you do that.
I only wish NotebookCheck were fair in their reviews and mentioned this before concluding them. I bought a laptop to develop games and software, and found out this is how it works after two months of trying drivers and reinstalling Windows 14 times because I thought I'd done something wrong somewhere...
It may prolong battery life, but an Acer V7 + 860M gets between 1 and 3 hours of battery life. It's not an ultrabook; there's no chance of keeping it in your lap. It's a heavy 17" monster of a laptop. No one needs more than an hour of battery life on this type of machine.
-
Don't forget the delay in presenting the frame.
The time it takes to transfer the frame data to the iGPU is added to the presenting delay. One can NEVER get smooth 60 fps with Optimus. Even if the GPU renders the frame in time, the presenting + processing time is too high.
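For a rough sense of how much that extra copy costs, here is a quick Python sketch; the 1080p frame size and the ~8 GB/s effective link bandwidth are illustrative assumptions, not measurements from that machine.

```python
# Rough estimate of the extra per-frame delay an Optimus-style copy can add.
# All numbers here are illustrative assumptions, not measurements.

def copy_latency_ms(width, height, bytes_per_pixel, link_gb_per_s):
    """Time to move one rendered frame from the dGPU to the iGPU over a link."""
    frame_bytes = width * height * bytes_per_pixel
    return frame_bytes / (link_gb_per_s * 1e9) * 1000.0  # milliseconds

if __name__ == "__main__":
    extra = copy_latency_ms(1920, 1080, 4, 8)   # 1080p, 32-bit color, ~8 GB/s link
    budget_60fps = 1000.0 / 60.0
    print(f"extra copy time per frame: {extra:.2f} ms")                        # ~1.04 ms
    print(f"share of the 16.67 ms 60 fps budget: {extra / budget_60fps:.1%}")  # ~6.2%
```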
Tested this with the simplest of tests: a freaking video. I played that video with processing forced through the nVidia GPU and then through the Intel GPU. It looks choppy and stutters through the nVidia GPU, and plays smoothly through the iGPU.
I have only bad things to say about Optimus, but mainly because on an xx60 machine it's useless. I can see its uses on lower-power machines, I'm not stupid about it. But it really broke my experience and no one cared. nVidia was awful in responding to my requests, and Acer was even worse... I think it's mostly Acer's job to change the BIOS. But forget about that; the BIOS is locked more tightly than a bank safe. Impossible to change even the simplest thing.
I can understand nVidia not being able to do something about it, but Acer... never again...
I called them and told them the problem. Their answer was basically: "If you think it's a problem, bring the laptop to us." I told them I know it's not a malfunction, but that I can't use it like this. Answer: "We don't care. If it works, there ain't nothing we can do." And I was like: "You make this laptop and sell it. What do you mean you can't change it?" But the discussion went nowhere. Maybe they've added an option to disable the iGPU on the Predator line. That would be a nice change for their products. -
Robbo99999 Notebook Prophet
Yeah, I can imagine that there would be extra latency created by sending frames from the GPU through the iGPU. I think in your paragraph #3 you got the NVidia GPU and iGPU mixed up (probably a typo). -
PrimeTimeAction Notebook Evangelist
It could be LAN parties or something work-related.
I know of people traveling with mini-ITX cases. Yes, it's absurd, but if it works then why not! -
Because one computer costs about $2,000-$5,000, and if I take a one-week vacation I still want to work / do it for fun / can't work with less power...
It takes me ~1 minute to render a Mandelbrot fractal. Not overly productive without a stronger GPU. -
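For anyone curious what that kind of workload looks like, below is a minimal CPU-only Mandelbrot escape-time sketch with NumPy. It is purely illustrative (not the setup described above); embarrassingly parallel array code like this is exactly what a stronger GPU speeds up dramatically, e.g. via CuPy or a shader.

```python
# Minimal Mandelbrot escape-time render with NumPy (deliberately slow on CPU).
import numpy as np

def mandelbrot(width=1920, height=1080, max_iter=500):
    x = np.linspace(-2.0, 1.0, width)
    y = np.linspace(-1.2, 1.2, height)
    c = x[np.newaxis, :] + 1j * y[:, np.newaxis]   # complex-plane grid
    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=np.int32)     # escape time per pixel
    for _ in range(max_iter):
        mask = np.abs(z) <= 2.0                    # points that have not escaped yet
        z[mask] = z[mask] ** 2 + c[mask]
        counts[mask] += 1
    return counts

if __name__ == "__main__":
    img = mandelbrot()
    print(img.shape, img.max())                    # ready to colormap and save
```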
OK, OK, I'm convinced Optimus is bad. What's the best way to tell which laptops have it? I'm specifically looking at the Clevo P650RG or its Pascal successor in a few months. Also, if Optimus is just software, can it be turned off?
-
It's the design of the vBIOS that determines whether it can be turned on or off.
Very few factory laptops permit disabling Optimus, but a new vBIOS generally helps.
The best way, and the only way to be sure, is to ask people who already own the machine; ask in the owners' thread how they deal with it.
Performance-wise, 1060 laptops will be quite good, but don't let Optimus spoil that for you. With the 860M, it adds enough latency in movies / anime to be easily noticeable. -
You gotta look it up or ask in the owners' thread for that laptop.
From my own experience: I code with the Oculus SDK and bought a Clevo P651SE because XMG claimed it would be compatible with Oculus.
Turned out it's not, because of Optimus, and I had to sell the laptop at a loss and buy my current Clevo P751ZM without this Optimus crap.
I learned that the old Clevo P651SE can never disable Optimus, because the nVidia card wasn't physically connected to any output, only to the iGPU.
But Clevo listened (a company that listens!), and in the Skylake refresh of the P651SE (I forget the name of the new one) they added a BIOS option to disable Optimus by wiring the nVidia card to one of the outputs and to the internal display, I believe (don't quote me on that, it's been some time). Obviously turning Optimus on and off requires a reboot, but it's still a lot better and should allow VR to function properly. -
Robbo99999 Notebook Prophet
Most laptops have Optimus I think. Laptops with screens advertised faster than 60Hz will be Optimus-free. SLI laptops (2 GPUs) are Optimus-free. My laptop has an option in the BIOS so that I can disable the iGPU and run directly off the GPU (it's called a MUX switch). I remember at least one or two Asus laptops being Optimus-free. Some of the Clevo laptops with desktop processors: Optimus-free. I couldn't tell you which laptops have Optimus but are able to disable it through the BIOS; that would require some extra research. -
No one would? Not everyone can afford to throw down so much money when they jack the prices up. I'm looking at both mid-range and top-range cards, because my wallet is only so big. If a mobile 1060 will cost the same amount as the 980M despite offering a mere 23% improvement, then I shudder to think about how much more money a mobile 1080 will cost, regardless of its performance.
-
Then again, for gaming at FHD, 980M / 1060 laptops will be enough to max out everything in almost all games!
-
Yep! And I'd say that's good enough performance to live with for quite a while.
-
All this rubbishing of Optimus is just making me more impatient to replace my P651SG with a G-SYNC enabled GTX 1080M laptop.
Sent from my iPad using Tapatalk -
I don't have one, but I'd argue that MSI's eGPU solution is the best because it uses a full PCIe x16 connection, so there's no bottlenecking. Of course it uses a proprietary connector that only a few MSI laptops support, so it's not as transferable as the Razer Core, which uses Thunderbolt 3 (PCIe x4); but as others have pointed out, using the Razer Core with non-Razer laptops doesn't produce the best results anyway. I'm the least familiar with Alienware's Graphics Amplifier, which I hear is the best optimized, but it also seems to combine the worst aspects of the MSI and Razer solutions: it uses a PCIe x4 connection like the Razer Core, but has a proprietary connector (not TB3) like MSI.
So if you're holding out for an eGPU and don't mind BGA laptops, I'd go with the MSI Gaming Dock Mini. There are only two computers out right now that are compatible with it: the GS32 Shadow, which came out a few months ago, and the GS30 Shadow, which came out last year. I'm not thrilled with either choice because the GS32 has a dual-core processor and the GS30 is older and comes bundled with a larger version of the dock. But MSI has announced plans to make more laptops compatible with the dock in the future (including the 14" GS40 6QE Phantom, which is up on their website but no resellers seem to have it in stock yet), so hopefully by the end of the year there will be better options. -
I will have to keep an eye on these eGPU options when Pascal is finally announced for mobile.
On the Alienware website I notice that it states PCIe x16 compatibility (I guess compatibility is different from the actual speed it supports?). The MSI and Razer products don't appear to be on sale in the UK yet.
Thanks for the info!
Sent from my iPad using Tapatalk -
That MSI gaming dock is an abomination. I don't get why they can't just make a standard case; no wonder it only works with 13-inch laptops.
-
So, my 1080 FTW's core clock is stuck around 2050 MHz. However, the GDDR5X clocks exceptionally well, at roughly 11.4 GHz.
The card is NOT power limited at all, with power usage rarely going past 100% when I max out the power target. -
I'm assuming you're referring to their old gaming dock? This one: https://us.msi.com/Laptop/GS30-Shadow-with-Gaming-Dock-Intel-Iris-Pro-5200.html
I agree that one is way too unwieldy. Their new eGPU solution is the Gaming Dock Mini, and it's much more manageable (and is supposed to be compatible with laptops greater than 13"): https://us.msi.com/Laptop/GS32-Shadow-6th-Gen-GTX-950M#hero-overview -
Darn those docks. Sooo big and expensive and impossible to move; I'd invest in an 870DM with SLI instead if I could.
-
Yeah, I get what you're saying from that point of view. I was kinda thinking without money in the equation, because if you want that kind of power, I'd have assumed you can afford it. It's not so much that I can't; it's just that I can't afford the depreciation. I think mid-range cards hold their value better.
Sent from my iPhone using Tapatalk -
If you do plan to purchase an eGPU, you should make sure it is PCIe x8 or above, because Pascal will utilize more than 4 lanes (roughly 1 GB/s of bandwidth per lane on PCIe 3.0).
At least the higher-end GPUs will.
I actually benchmarked the 1080 on PCIe 1.0 x16 (which I believe is about the same bandwidth as 3.0 x4) in Fire Strike and noticed very little difference.
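For reference, a small sketch of the nominal per-lane numbers behind that comparison (approximate figures: 8b/10b encoding on 1.0/2.0, 128b/130b on 3.0, ignoring further protocol overhead):

```python
# Approximate usable PCIe bandwidth per lane, per generation, in GB/s.
PER_LANE_GB_S = {"1.0": 0.25, "2.0": 0.5, "3.0": 0.985}

def pcie_bandwidth(gen, lanes):
    return PER_LANE_GB_S[gen] * lanes

print(f"PCIe 1.0 x16: {pcie_bandwidth('1.0', 16):.2f} GB/s")   # ~4.0 GB/s
print(f"PCIe 3.0 x4:  {pcie_bandwidth('3.0', 4):.2f} GB/s")    # ~3.9 GB/s, about the same
print(f"PCIe 3.0 x16: {pcie_bandwidth('3.0', 16):.2f} GB/s")   # ~15.8 GB/s
```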
-
SLI is pretty much f***ed up these days! Poor or no gains in benchmarks at all and crappy fps in actual games (besides Doom). Not worth the hassle...
-
Nah, I was referring to the new one; the old one is beyond abominable (it looks like a printer). The whole notebook-pad idea is just disgusting. Just give us an external case we can connect via wires and hide away. The idea of a dock that big and that fat sitting on my desk is an abomination to me: it's ugly and cheap-looking, especially with the notebook off of it, and it raises the laptop way too much IMO for it to be comfortable. Normal eGPU cases can look pretty darn good; look at the Asus solution for example, or even the Razer Core. We just need someone to give us a PCIe x16 option that isn't a monstrosity.
-
Ionising_Radiation Δv = ve*ln(m0/m1)
From an 860M to a 980 in two years within the same form factor. nVidia, my body is ready.
-
When HBM2 comes out, it should be very noticeable. That's basically what I was referring to.
-
If PCIe speeds were meaningless, NVIDIA wouldn't be pushing their NVLink at much higher speeds. Though for many games I have seen even PCIe 3.0 x1 be good enough not to be a bottleneck. So who knows?
-
Probably for certain pro applications. -
NVLink and PCIe connections serve different purposes.
NVLink was designed to eliminate micro-stuttering by ensuring that data is transmitted fast enough between the two cards, or between card and computer.
When working in SLI, there is a ton of communication between the cards to keep them in sync, and that communication creates a lot of overhead.
Now, if we take a look at PCIe speeds, the whole thing is not about an absolute bottleneck but a relative one. When you get render times of 10 ms or so, every extra millisecond added to the presenting time has a strong negative impact on the total frame time and can lead to choppy playback or frame drops. The idea is to reduce the render / presenting time as much as possible; the true bottleneck is the GPU core. But what happens when the GPU core is almost fast enough, and something adds just enough delay to cause frame drops?
EDIT: It might also help to note that as we close in on displays that can actually show more frames, those small improvements in presenting time are sure to matter.
We've got a 16.6 ms maximum render + present time for smooth 60 fps, which is not always achievable with Optimus: even if it's not something the hardware flags, the presenting time is still too high. I.e. the GPU renders the frame, but it arrives late and looks choppy at times.
But what about a 240 Hz refresh rate? Or more? Display tech advances.
That means a 4.16 ms total render + presenting budget. We know that presenting will take time regardless of what we do; we can't change that, because it takes time to push all the pixels out, etc. This means we must improve every tiny bit of tech we can to bring render + presenting time well under the maximum allowed. To have smooth 60 fps, total render + presenting time should really be around 10 ms; if it is at 16 ms, the frame can be flagged as delayed and, depending on the display + GPU + CPU + settings combination, may end up dropped.
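As a quick sanity check of those budgets (a minimal sketch; the ~10 ms headroom figure above is the poster's rule of thumb, not derived here):

```python
# Frame-time budget at a given refresh rate: render + present must fit inside it.
def frame_budget_ms(refresh_hz):
    return 1000.0 / refresh_hz

for hz in (60, 120, 144, 240):
    print(f"{hz:>3} Hz -> {frame_budget_ms(hz):5.2f} ms per frame")
# 60 Hz -> 16.67 ms, 120 Hz -> 8.33 ms, 144 Hz -> 6.94 ms, 240 Hz -> 4.17 ms
```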
EDIT 2: Don't think of these improvements as solving the bottlenecks, which are the GPU core, render time, and display refresh rate.
But these improvements do help close the gap toward better performance. -
Uh, pretty sure NVLink is designed for HPC applications. It's not even going to be available on consumer devices in the near future.
-
It should be available by the end of next year, early 2018. Definitely when Volta rolls around.
-
Its purpose was to allow faster communication between one GPU and another, and between GPU and CPU.
Edit: I hadn't refreshed the page before posting.
As @J.Dre pointed out, NVLink will eventually come to consumer-grade products.
I can't wait for those 240 Hz monitors and GPUs that can render 8K at 240 fps.
(And it will happen. Think of how impossible 4K at 60 fps sounded in 2000, and look at us now.) -
Wanted to ask: how do vendors price their laptops? Like, do OEMs tell the vendors "this is the baseline cost" and the shops add their own markup, or do they set a fixed price for their products and the shops get a commission or something?
-
Games don't require you to constantly shuffle data back and forth between GPUs and CPUs nearly as much as compute workloads do. NVLink lets you scale out to multiple GPUs for compute purposes since a) you can reduce the communication costs that plague interdependent parallel computations and b) you can potentially use RAM to store data for certain computational tasks instead of having to swap in and out all the time. SLI micro-stutter has absolutely nothing to do with why they implemented NVLink right now. 3.0 x16 is well in excess of the bandwidth you currently need for SLI operation, and 4.0 is just around the corner. Even at an absurd 8K 240Hz 32bpp, transferring every other frame would only require 14.83GB/s of bandwidth.
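For what it's worth, here is the arithmetic behind that 14.83 figure, assuming uncompressed 32 bpp frames and alternate-frame rendering (each card contributing every other frame):

```python
# Bandwidth needed to ship every other frame at 8K, 240 Hz, 32-bit color.
width, height, bytes_per_pixel = 7680, 4320, 4
frame_bytes = width * height * bytes_per_pixel      # ~132.7 MB (~126.6 MiB) per frame

frames_per_second = 240 // 2                        # AFR: one card sends every other frame
required = frame_bytes * frames_per_second          # bytes per second

print(f"per frame: {frame_bytes / 2**20:.1f} MiB")
print(f"required:  {required / 2**30:.2f} GiB/s")   # ~14.83 GiB/s
```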
-
That's missing the point.
A single frame at 8K is much larger when it comes to real frame data: one frame can reach up to ~100 MB, so 1 GB is about 10 frames and 10 GB about 100 frames.
Now, those 14 GB/s would be enough to send said frames if they were already stored somewhere and simply read out at that speed.
The problem is that these frames need to be calculated and then sent effectively immediately. We need to send those 50-100 MB instantly, in less than 1 ms. That is the presenting time. Regardless of how much the technology advances, the presenting time can be a bottleneck as long as the processing time is high enough.
A connection that can reliably present 144 frames per second at 8K needs to be 100 GB/s. That is what the problem looks like, because this applies to game data. A movie can be streamed, since its processing step should be short enough to keep up, and movies don't go above 60 fps right now.
But movies have processing time too: read time, data transfer time, and frame decoding time. Games use compression to avoid such problems.
Regardless of the propaganda, once you see where a frame drops and why, it's impossible to unsee the effect. The time data takes to go from the GPU to the display and be displayed is very high; or at least, it's higher than the comfort zone for higher frame rates or resolutions right now.
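To put rough numbers on that per-hop cost, here is a sketch of how long a single uncompressed 32-bit frame takes just to cross a link; the link speeds are nominal and no overlap of transfer with rendering is assumed, so treat these as worst-case figures:

```python
# Time for one uncompressed frame to cross a link of a given speed, in milliseconds.
def transfer_ms(frame_bytes, link_gb_per_s):
    return frame_bytes / (link_gb_per_s * 1e9) * 1000.0

frames = {"FHD": 1920 * 1080 * 4, "4K": 3840 * 2160 * 4, "8K": 7680 * 4320 * 4}
links = {"PCIe 3.0 x4": 3.9, "PCIe 3.0 x16": 15.8, "hypothetical 100 GB/s": 100.0}

for res, size in frames.items():
    for name, bw in links.items():
        print(f"{res:>3} over {name:<21}: {transfer_ms(size, bw):6.2f} ms")
# e.g. an 8K frame needs ~8.4 ms over PCIe 3.0 x16, but only ~0.5 ms for FHD.
```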
Again, at 4K or FHD and 120 Hz this should NOT be the bottleneck, so let's not overstress it right now, but for the future we need a faster connection. -
It's more complicated than that.
But if it works like most businesses, the retailer receives an empty shell (a barebones unit) for a price; then the seller adds components, each at a certain price.
But there are taxes, which vary greatly between sellers; not all sellers get access to all hardware at the same price; and there are second-tier sellers, i.e. a seller that buys from another seller instead of directly from the original company. -
Some interesting realizations and comments by JayZ + benchmarks and review of the Zotac GTX1080 AMP Extreme.
He found his 3 Founders Edition GPUs are the only ones that reach 2100 MHz; the others so far haven't gone much past 2050 MHz, even with lots more power phases and better cooling. The Zotac, as maxed out in resources as it is, still artifacts approaching 2100 MHz. -
That's lovely!
We're going to get a lot of performance out of the 1080, but right now I feel that someone who owns a 980, or a 980 Ti, will just keep it, given the price/performance ratio. Especially on laptops.
But I have high hopes that the 1080 Ti will be even more amazing.
Don't get me wrong, the results so far are absolutely amazing. I'm happiest with the 1060 scores, as those give a new lease of life to the mid-range segment, where most people usually are.
On the other hand, that last sentence might have been a stretch. Most laptops sold in Romania are either $300 laptops (it's a thing...) or 950M-based laptops, so the performance of the 1050 is also going to be a really good step up for the majority of people.
Now, for enthusiasts, this new gen is sweet. I wish AMD would bring something better so they would push each other in the enthusiast segment and keep improving things here, but it's good as long as we keep moving forward. -
We still need to wait for actual hardware delivery and benchmarks from shipping 1080 laptops to see what the real improvement over a mobile desktop 980 will be.
And even so, if the best mobile 1080 performance doesn't unseat the 980 far enough to push people to upgrade, I don't think a mobile 1080 Ti will either; the limit is power and heat dissipation.
The desktop 1080 draws more power than the desktop 980, so for top performance, more than 200 W is going to be needed. A 1080 Ti will need even more power.
Given an upper limit of 200 W for the P870, and maybe 250 W for a P871, there is only so much performance that can fit in a laptop; some 980 laptops are limited to 165 W.
A 1080 Ti pared down to fit the maximum power/cooling might not perform any better than a 1080 pared down to fit the same power/cooling.
There might be small incremental performance gains going 1080 => 1080 Ti in the same power / thermal envelope, but not very cost effective.
What is your upgrade trigger point? 50% faster, or? -
IF it ain't 50% faster, the upgrade does not make sense. Even 70% faster in some cases.
A 1060 laptop is 100% faster than an 860M. At that point, it's worth making the upgrade.
But after a point, we would stop and ask ourselves whether most people would even feel the difference.
The 980M and 980 are completely capable of fluid 60 fps at FHD resolution.
And 4K is not an ideal resolution in laptops. Even FHD on 17" laptops is sometimes hard to read. Of course, for gaming 4K should look sharper and cleaner, not to mention the color-space coverage, but it looks a bit like a world of trade-offs really.
Maybe if they pushed 120 Hz screens in laptops it would increase the need for a much stronger GPU, but otherwise, unless one is connecting an external monitor, it feels like overkill. I absolutely need one for software development, game development, graphics, and many other things, but developers and media content creators are fewer in number than gamers.
The bottom line is that if we get something 30% faster than a 980, it's not worth upgrading from a 980; it probably is worth it from a 980M or less.
If we get something 60% faster than a 980, everything under a 980 becomes a really good candidate for an upgrade, while the 980 is still questionable, simply because of prices. If one has a 980-based laptop, upgrading means losing a lot of money, and if it isn't a P870 with a desktop CPU and such, the upgrade means losing enough for it not to be worth it anymore. -
Performance-wise, that's about what I think too.
We need to see a 100% performance improvement from the mobile 1080 to feel like we're getting the same improvement the desktop 1080 gives over the desktop 980.
With so much power from even a single 1080, and 2x 1080 SLI locking in 4K support, it would be nice if MS and app makers figured out how to fix the scaling issues.
At 18" a 4k display would be great, assuming all the scaling problems were to go away, but they haven't, so meh.
It would be nice if the laptop display makers got on board with 2k at high refresh to match the desktop display trend.
I don't want to be carrying around a huge 21:9 screen in my laptop, but a 144-240 Hz 18" 1440p laptop screen would be a nice fit for 1x/2x 1080 performance. -
All this talk about upgrading at 60 percent... So I have a 965M, and the 1080 performs 200 percent better, so according to your logic I should upgrade?
Now, how much will this 1080 laptop cost vs. the current cost of a 965M laptop... And then again, most people buy mid-range, so a 1060 is 40 percent faster, but how much does it cost... And then a 1070 is 80 percent faster, worth the upgrade, no? Well, how much does it cost for that gain... Turning off AA and lowering shadow resolution also bumps performance 80 percent in some cases...
Feasibility. -
It's not a law; you aren't required to upgrade when a particular % performance improvement is attained.
It's more of a "we assume we could if the reason to do so existed". Does that reason exist?
You can stay at the same performance level for many years; it's only if you desire an improvement, watching and perhaps even waiting, that trigger points like that are of interest.
It's a tipping point, a nexus of resource availability at a cost that is acceptable.
Some have the desire to stay at the front of potential performance advantage, for some it's less so, and for most it's not even a thought that occurs to them as long as what they have is doing a good job.
It's all fun
-
Are there any solid benchmarks for the GTX 1060M, GTX 1070M, and GTX 1080M, or the GTX 1060 (notebook) and GTX 1080 (notebook), or are these just best-guess comments?
-
We all find our happy place(s) in the continuum of performance upgrades and tuning, and eventually find a balance. -
Ionising_Radiation Δv = ve*ln(m0/m1)
Actually... comparing the 860M and 1060 stock, the 860M scores ~3700 (Graphics score ~4000) in the latest Fire Strike; the notebook 1060 supposedly has a score of ~11000 (Graphics score ~12000). That's 200, TWO FREAKING HUNDRED, per cent over the 860M. -
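A quick check of that arithmetic, using the approximate scores quoted above:

```python
# Relative Fire Strike gain from the rough scores quoted in the post.
score_860m, score_1060 = 3700, 11000
gain = (score_1060 - score_860m) / score_860m
print(f"{gain:.0%} higher than the 860M")   # ~197%, i.e. roughly 200 per cent
```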
Ionising_Radiation Δv = ve*ln(m0/m1)
I think these three GPUs don't exist. There's no point when the notebook 1060 can already come in at 65 W; the current GTX 860M has a TDP of 65 W.