I noticed there are three ways:
1. Get an extremely fast GPU now: This does not suit me since I never go for the very high end, not only because of cost but because the very high end is also the most overpriced at its prime.
2. Get a replaceable GPU: This sounds nice, though I often find better offers locally on other laptops. I might be able to get a laptop with a better make from a neighboring country, say a Sager, though I worry about warranty and such.
3. Have a very good eGPU setup. I actually LOVE this idea. The reason is that I use my laptop as a mobile desktop that I set up in two different places, with external monitors and all. So for my lifestyle, an extra external device is not that bad, especially if it's light (a small ATX build, for example, is NOT light, contrary to what one may assume; they are huge compared to laptops).
So... IMO I'd probably go with a laptop that has Thunderbolt or another fast eGPU interface, though (a) I hate Apple and its overpriced stuff, so it would have to be one of those rare other machines that support it, and (b) I worry about the supporting equipment being overpriced. A comparable alternative technology would be better.
Of course, it would be amazing if they wised up and offered an easy way to expose x16 PCIe directly; the mobos are perfectly capable of it.
Of course, there's still the solution of swappable GPUs, though they might be more costly than that, and my lifestyle doesn't care much about mobility, mainly about keeping weight low. And an eGPU is light enough.
-
1 & 2 go hand in hand. Top-end GPUs are usually replaceable. And to be honest, your best way to go is to get the fastest you can afford, and if you can, the top-end card. The only reason being that laptop GPUs are quite expensive, and it's not really cost-effective to keep upgrading your GPU.
eGPUs are a mixed bag. They work on only a limited number of laptops and are quirky to get working. -
saturnotaku Notebook Nobel Laureate
-
They are viable for certain lifestyles. E.g., I do move the thing a lot, but NOT as a laptop; strictly as a mobile desktop. I.e., I set it up on a desk with external peripherals for hours in two different places.
In that framework, an eGPU is spectacularly beneficial: admirable GPU power in a very light package.
The main problem, in my opinion, is Intel being so good at CPUs compared to where the others are on GPUs. A modern i7 lasts around 5 years; a laptop GPU is often only good for a year if one doesn't pay very close attention to it initially.
It's not strictly anyone's fault, but Intel has its own fabs, while the others have to beg foundries for manufacturing, so logically their technology is behind. -
bigtonyman Desktop Powa!!!
You're comparing apples to oranges there, buddy. There is a reason we still have discrete graphics and don't rely on the Intel iGPU. -
-
I am not sure what you mean by a modern i7 "lasting" five years. Firstly, it is not possible to get that number, because the Core "i" series hasn't even been out for five years yet. Secondly, the very first chips released with the "i" nomenclature, codenamed Bloomfield, were numbered 9xx. You hardly see any of those anymore; most people have upgraded.

That being said, the reason GPUs "seem" to last less long is that developers keep putting out games that are very hard on the GPU. On the other hand, on a forum like this, you won't see many people who use software that is very hard on the CPU. In our lab, we need to upgrade our CPUs every two years at the very MOST. It all depends on what you do. For games, yes, GPUs "seem" to last less, although the GTX 580M from three years ago is still a pretty powerful beast. For running highly optimized parallel code, CPUs will be obsolete in two years. For working in MS Word, CPUs may not become obsolete for several years.

I still have an HP Brio from 11 years ago with a 550MHz P3 that runs the Office suite just fine. I doubt I can play Counter-Strike on it. That doesn't mean that the P3 "lasted" longer than the GPU in it (a 64MB nVidia GeForce4 MX 420).
-
Yep. CPUs will usually bottleneck games less quickly than GPUs, but 5 years is quite a stretch. As maverick1989 noted, 5-year-old CPUs will manage basic tasks fine, but they won't keep up with modern GPUs. Look at the Core 2 Quad CPUs, which weren't very plentiful: the Q9000, mainly, which is 4 years old and couldn't manage 90% of the games released today. It would just limit performance way too much. Prior to that there were only dual-core mobile CPUs, and those would be like putting a lawnmower engine in a Ferrari. It just isn't going to work. There have been users with a gen-1 i7 XM CPU who upgraded to a 680M and found the performance less than stellar.
-
I got the next-best GPU 3 years ago (a 5850M with GDDR5), and it still holds up in modern games today at high detail and 1080p, except in Crysis 3 and a few other heavy titles.
-
Notebook with:
- High-end video card with 4GB of VRAM. Enough memory for future games with big textures. Also make sure it's a notebook with an MXM module, because that means you can swap out the GPU and replace it with a better one. Preferably MXM Type 3.0B, because it allows 100W GPUs.
- 16GB RAM. A bit overkill, but it's cheap, so why not?! At least you are set for the future.
- i7 from the newest CPU architecture. This is one component you can't swap out, because Intel uses a different chipset and/or a different socket for each new generation of CPUs. It really doesn't matter which CPU you buy as long as it is a quad core; they are more than powerful enough to drive the fastest GPUs out there.
- 240W PSU. Notebooks today are mostly delivered with a 180W PSU. That is too little, because A) it won't allow you to overclock much with the newest and fastest GPUs, and B) who knows what power requirements the upcoming GPUs will have.
- SSD. Forget about HDDs. -
-
You can mod your own PSU to get 240W if needed; that's what many Clevo owners are doing. But if you want one from the beginning, then you'll have to get a Clevo 17" or an Alienware 17". Asus, I think, comes with a 230W PSU with the 780M, but they tend to have proprietary GPUs. MSI so far only offers 180W but uses the battery to get the extra boost of power when needed.
-
The best way to future-proof as well as you can is:
1. Get the fastest GPU you can possibly get for your budget
2. Make sure it's true MXM so you can stretch its life without buying a whole new laptop
3. Be prepared to live with turning down some details at some point -
Option 1 from your list. Recycle after 3-4 years.
-
If you overclock, it normally won't be running above 180W consistently, but when it does, it will draw the extra power from the battery, likely no more than 20W. When it drops back below 180W, it will be charging the battery, so you should be able to get several hours, if not more, out of even a highly demanding game.
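A back-of-the-envelope check, using the 20W figure above and a battery capacity that is purely an assumed example:

    180W from the PSU + 20W from the battery = 200W available at peak
    assumed 60Wh battery / 20W deficit = ~3 hours of continuous peak draw
    whenever draw falls back under 180W, the surplus recharges the battery

So the battery only has to bridge the peaks, not carry the whole load. -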
I don't understand the obsession some people have with promoting whatever they bought themselves as "necessary", e.g. the guy with the SSD.
What? Have you got any idea how useless a disk is to GPU performance? In fact, for most applications, the disk is not used AT ALL after the initial load, provided there is enough RAM.
Kids have to learn that a computer's performance is made by the memory, the CPU, the GPU, and their interconnections. Stop ridiculing yourself by pretending other parts are as important.
It's especially ridiculous since a disk can be upgraded later anyway, so it's completely off topic; it's de facto future-proof. -
I got at "random" a GPU that came with the first Sandy Bridge i7s because I didn't know better or had time to look for another or wait for another. It's around 15-20% slower than what you got but that's not enough to not be considered aging today. Sure, you did get an admirable GPU that lasted for 3 years but after that you almost hope for the whole thing to die so you can move on if it's unswappable.
I keep thinking I'd prefer robust eGPU solutions. I don't use the battery of the laptop; it's mostly a mobile desktop. But even mini-ITX builds are too heavy and bulky to actually carry, whereas an eGPU is genuinely portable by comparison.
It mainly bugs me enormously that the i7 is a beast that doesn't bottleneck the thing in the slightest, while GPUs are way behind.
And I keep thinking that NVIDIA / AMD not having access to the smallest-node transistors is THE factor.
It wasn't for nothing that NVIDIA told their fans they couldn't move to 22nm even if they wanted to, for cost reasons. -
-
Personally, I just buy the best system for my needs and then use it until I see a compelling reason to upgrade. Buying a system with an MXM-standard graphics card or using an eGPU setup may put off the inevitable a bit, but nothing is really future-proof.
-
You have no idea how modern games work. That's all I have to say. If they had to wait for a disk during live rendering, not only would they be slow, they would be hanging. Any loading they do must be pre-loading, never for the actual live scene. The live scene must always be pre-loaded for any frame rate above about 1 FPS.
-
Right. During the "Loading..." screen, scene assets are being prioritized, organized, and loaded into vRAM and system RAM for rapid fetching, but in no way are 100% of the assets loaded for a given level. Otherwise you'd be limited to 1GB, 2GB, whatever the vRAM limit is, and even then you can't use 100% of the available vRAM, because a large portion of it is utilized for processing the scenes.
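As a rough illustration of that last point (made-up but plausible numbers, not from any specific game):

    1920 x 1080 x 4 bytes = ~8MB for one 32-bit buffer
    back buffer + depth buffer + a few render targets, at 4x MSAA = hundreds of MB
    on a 2GB card, well under 2GB is left for the actual level assets

So the engine has to stream and prioritize; the full asset set never just sits in vRAM.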
-
I reiterate: you have no idea about 3D graphics programming. Don't let your selfishness stop you from learning about it.
In the overwhelming majority of cases, vRAM sees ABSOLUTELY no bottleneck from petty disks during live rendering (as opposed to initial loading). If your GPU has vRAM-related issues, I assure you, NO DISK will improve them. Well, unless we're talking about the extreme of a disk from the '90s that can't even deliver the occasional 1MB a typical game might need once in a while, but that's rarely the case and usually just bad programming. fread()s must be avoided as much as possible at all times, and when unavoidable they should run on at least a second thread and never block the main thread.
Unless you're strictly talking about streaming half of World of Warcraft while flying on a gryphon at super speed into brand-new (and non-raid) zones, in which case, KEK.
In fact, EVEN THEN, the overwhelming majority of good games have done the right thing: fread()s are strictly confined to a second thread, so rendering speed isn't affected even in the rare case the disk is loading slowly.
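To make that concrete, here is a minimal sketch (hypothetical C++, not from any particular engine) of the pattern: all disk I/O is confined to a loader thread, and the render loop only ever polls a queue of finished assets.

    #include <atomic>
    #include <chrono>
    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>
    #include <vector>

    struct Asset {
        std::string path;
        std::vector<unsigned char> bytes;  // raw file contents, uploaded to vRAM later
    };

    std::mutex mtx;
    std::condition_variable cv;
    std::queue<std::string> requests;  // render thread -> loader thread
    std::queue<Asset> ready;           // loader thread -> render thread
    std::atomic<bool> quit{false};

    // Loader thread: the ONLY place fread() is allowed to run.
    void loaderThread() {
        for (;;) {
            std::string path;
            {
                std::unique_lock<std::mutex> lock(mtx);
                cv.wait(lock, [] { return quit || !requests.empty(); });
                if (quit && requests.empty()) return;
                path = std::move(requests.front());
                requests.pop();
            }
            Asset a{path, {}};
            if (FILE* f = std::fopen(path.c_str(), "rb")) {  // slow disk I/O, off the main thread
                std::fseek(f, 0, SEEK_END);
                a.bytes.resize(static_cast<size_t>(std::ftell(f)));
                std::fseek(f, 0, SEEK_SET);
                std::fread(a.bytes.data(), 1, a.bytes.size(), f);
                std::fclose(f);
            }
            std::lock_guard<std::mutex> lock(mtx);
            ready.push(std::move(a));
        }
    }

    // Called from the render loop: queues a load, never blocks on the disk.
    void requestAsset(const std::string& path) {
        std::lock_guard<std::mutex> lock(mtx);
        requests.push(path);
        cv.notify_one();
    }

    // Called once per frame: drains whatever finished loading, never waits.
    void pollReadyAssets() {
        std::lock_guard<std::mutex> lock(mtx);
        while (!ready.empty()) {
            Asset a = std::move(ready.front());
            ready.pop();
            // here you would upload a.bytes to vRAM; the frame was never stalled
            std::printf("loaded %s (%zu bytes)\n", a.path.c_str(), a.bytes.size());
        }
    }

    int main() {
        std::thread loader(loaderThread);
        requestAsset("textures/zone_far.tex");  // hypothetical asset path
        for (int frame = 0; frame < 100; ++frame) {
            pollReadyAssets();  // render loop keeps running whether the load is done or not
            std::this_thread::sleep_for(std::chrono::milliseconds(16));  // ~60 FPS tick
        }
        quit = true;
        cv.notify_one();
        loader.join();
    }

The render loop calls requestAsset() and pollReadyAssets() and never touches the disk itself; only the loader thread ever blocks on fread(), so a slow disk can delay when an asset appears but never the frame rate.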
You will not be limited by petty disks 95% of the time, and close to 100% of the time in competitive games (FPS, e-sports, MMO raiding, etc.), and that's a fact.
It's especially funny that the more competitive and "important" a game or part of a game is, the more likely it is to be designed to avoid loading from disk at all.
I.e., spare me the lectures about SSDs being "important for GPU performance". I wasn't born yesterday. -
In addition, you've already said that you don't want to bother with basically the only way to future-proof your GPU, soooo this thread seems pretty pointless.
What is the best way to future-proof a laptop's GPU? (well, not forever, but for as long as possible)
Discussion in 'Gaming (Software and Graphics Cards)' started by leladax, Jun 20, 2013.