Intel's HEDT lineup is overpriced and needs a price cut. The 2950X seems to be a killer CPU for the money if you need all those cores and threads. I don't, so I'll stick with my 6/12 and soon 8/16 core CPUs. I don't really need an 8/16, but I'll grab it for the hell of it and donate the 8700K to the laptop.
-
Lots more to watch, but I thought this one stood out @ 00:19:48
Threadripper 2 2990WX in-depth review & benchmarks
-
PCWorld review: https://www.pcworld.com/article/329...amds-32-core-cpu-is-insanely-fast.html?page=2 -
ThreadRipper 2990WX vs 7980XE - Thread Load Scaling, 1-64 Threads
ThreadRipper 2990WX vs 7980XE - Percent Performance, 1-64 Threads
PCWorld's Dollars-per-Core chart with the new Threadripper 2 CPUs added
Threadripper 2 2990WX in-depth review & benchmarks -
-
Steve does not seem happy with the 2990WX, and rightfully so. It is a mixed bag: while some selected benchmarks benefit, some workloads do not. I doubt a 1990X would have fared better, which makes it more understandable why one never saw the light of day.
Now, if the CPU scaled up 53% across the board over the 2950X, it would have been awesome and a no-brainer, but that is far from the case. -
But Steve also shows he reviewed Phoronix's testing, which was done on a custom-compiled Linux with tweaks for utilizing the 2990WX. That is why he kept saying there may be an issue with the Windows scheduler, which makes sense as a way to reduce latency hits on the other two dies. How much of a difference that makes is yet to be seen, so I am looking forward to more information on that front.
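On Linux you can experiment with that placement yourself. Here's a minimal sketch using numactl; the node numbers are an assumption (on a 2990WX, two of the four NUMA nodes report local memory and two report none), so check `numactl --hardware` on your own box first, and `./my_workload` is just a stand-in name:

```bash
# Show the NUMA topology; on a 2990WX, two nodes report local memory
# while the two compute-only dies report none.
numactl --hardware

# Pin a latency-sensitive job to the memory-attached dies only
# (nodes 0 and 2 are assumed here; substitute what --hardware reports).
numactl --cpunodebind=0,2 --membind=0,2 ./my_workload

# For comparison, spread allocations across all four dies.
numactl --interleave=all ./my_workload
```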
-
Below is my speculation (salt shaker not only recommended but required):
I do not know if the Windows task scheduler is that smart. Essentially, under NUMA it should treat the four dies as four computers, with two of them faster than the other two. If we use CB R15 to divide the power up: 16 cores give about 3500, so 32 cores should give about 7000, yet the chip yields about 6000. That means the added cores only contribute about 2500 to the score, or 71% of the power of the first sixteen. If scheduled badly, tasks on the two fast dies could be left waiting on the slow ones; worse, since the fast dies are the ones with direct memory access, bad placement could slow things down even further.
So the scheduler needs to take threads that depend on other threads' completion and run them on the faster cores. CB R15 has very independent threads, which is why I use it to show the power difference between cores. I know this is an oversimplification, so don't shoot me; I'm just hoping it points out an issue.
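To make that arithmetic explicit, here's a trivial back-of-the-envelope sketch using the same round figures from the post (assumed numbers, not measurements):

```bash
# Scaling math from the CB R15 figures above (round numbers from the post).
awk 'BEGIN {
    first_half = 3500            # ~score from the first 16 cores
    full       = 6000            # ~score from all 32 cores
    added      = full - first_half
    printf "second 16 cores add %d points = %.0f%% of the first half\n",
           added, 100 * added / first_half
}'
# -> second 16 cores add 2500 points = 71% of the first half
```
-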
A Look At The Windows 10 vs. Linux Performance On AMD Threadripper 2990WX
Written by Michael Larabel in Operating Systems on 13 August 2018. Page 1 of 4. 30 Comments
https://www.phoronix.com/scan.php?page=article&item=2990wx-linux-windows&num=1
AMD Threadripper 2950X Offers Great Linux Performance At $900 USD
Written by Michael Larabel in Processors on 13 August 2018. Page 1 of 7. 10 Comments
https://www.phoronix.com/scan.php?page=article&item=amd-tr2950x-linux&num=1
"If you would like to see how your own Linux CPU performance compares to the results shown in this article, simply install the Phoronix Test Suite and run phoronix-test-suite benchmark 1808102-RA-AMD2950XT81."
AMD Threadripper 2990WX Linux Benchmarks: The 32-Core / 64-Thread Beast
Written by Michael Larabel in Processors on 13 August 2018. Page 1 of 11. 61 Comments
https://www.phoronix.com/scan.php?page=article&item=amd-linux-2990wx&num=1
"With the Phoronix Test Suite being open-source and designed to be reproducible and delivery fully-automated benchmarking, it is very easy to see how your own Linux system(s) compare to the Intel/AMD Linux CPU benchmarks shown in this article. With the Phoronix Test Suite on your Linux/BSD/macOS system, simply run phoronix-test-suite benchmark 1808115-RA-THREADRIP08 for your own fully-automated, side-by-side benchmarking comparison against the results in this article from test installation to test execution and result analysis."
AMD Threadripper 2990WX Cooling Performance - Testing Five Heatsinks & Two Water Coolers
Written by Michael Larabel in Peripherals on 13 August 2018. Page 1 of 5. 5 Comments
https://www.phoronix.com/scan.php?page=article&item=amd-2990wx-cooling&num=1
Linux Kernel Expectations For AMD Threadripper 2
Written by Michael Larabel in AMD on 9 August 2018 at 03:50 PM EDT. 11 Comments
https://www.phoronix.com/scan.php?page=news_item&px=Threadripper-2-Kernel-Bulletin
AMD Threadripper 2000 Series Details: Up To 32-Cores / 64-Threads With The 2990WX
Written by Michael Larabel in Processors on 6 August 2018. Page 1 of 1. 40 Comments
https://www.phoronix.com/scan.php?page=article&item=amd-tr2990wx-preview&num=1 -
And the Forbes article (which references the Phoronix one):
AMD's New Threadripper 2990WX Much Faster On Linux Than Windows 10
https://www.forbes.com/sites/jasone...th-amd-threadripper-2-use-linux/#5ca649c239c9 -
A Look At Linux Gaming Performance Scaling On The Threadripper 2950X
Written by Michael Larabel in Linux Gaming on 16 August 2018. Page 1 of 4. 10 Comments
https://www.phoronix.com/scan.php?page=article&item=threadripper-2950x-gaming&num=1
"There are a lot of real-world Linux workloads that can benefit from 16 cores / 32 threads on the Threadripper 2950X (or even 32 cores / 64 threads with the Threadripper 2990WX), but Linux gaming isn't close to being one of them."
A Quick Look At The Windows Server vs. Linux Performance On The Threadripper 2990WX
Written by Michael Larabel in Operating Systems on 16 August 2018. Page 1 of 4. 14 Comments
https://www.phoronix.com/scan.php?page=article&item=windows-server-2990wx&num=1
"Windows Server did end up being faster than Windows 10 at the time to run various Git revision control system commands, but that too was still behind the Linux performance."Last edited: Aug 16, 2018 -
Threadripper 2nd Generation Interview with AMD's James Prior
Newegg Studios
Published on Aug 17, 2018
Trisha and JC talk to James Prior from AMD about the second generation of Threadripper https://www.newegg.com/promotions/amd/18-2407/index.html
This massive new CPU has 32 cores and 64 threads, making it a multitasker’s dream. They’ll be diving in deep, and answering all your questions about this incredible new CPU.
-
Hopefully lots of used GPUs will show up soon on the second-hand market, with discounts on new Pascal GPUs to follow when the new-generation Nvidia GPUs hit the shelves.
GeForce GTX 1070 Ti vs. Radeon RX Vega 56, 2018 Update [25 Game Benchmark]
Hardware Unboxed
Published on Jun 3, 2018
Can Custom Vega 64 Beat The GTX 1080?
2018 Update [27 Game Benchmark]
Hardware Unboxed
Published on Aug 16, 2018
-
Linux vs. Windows Benchmarks, Threadripper 2990WX vs. Core i9-7980XE
Hardware Unboxed
Published on Aug 20, 2018
-
Now, for anyone complaining about the kernel used: Phoronix did optimized kernel compilations AND used the most recent kernel in some of his testing, but I don't think that was done in the Windows vs. Linux run. As a note, any variation between head-to-head tests can largely be attributed to using the -O3 and -march=native switches when compiling. Also, the most recent kernel did have a couple of optimizations, IIRC.
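For context, the difference between a stock build and a Phoronix-style tuned build looks like this; a hedged sketch (the source and binary names here are made up):

```bash
# Typical distro package build: conservative, portable optimization.
gcc -O2 bench.c -o bench-generic

# Tuned build: aggressive optimization plus code generation targeting
# the host CPU (here, the Threadripper itself), as discussed above.
gcc -O3 -march=native bench.c -o bench-native
```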
-
I would dump Windows in a heartbeat if the software I use and paid for ran on Linux. I would even repurchase some of the tools.
-
BIOS 3.30 for the X399 Taichi @ajc9988
https://www.asrock.com/MB/AMD/X399 Taichi/index.asp#BIOS -
Windows 10 patch KB4343909 addresses high CPU usage with AMD processors after the June/July patches and AMD microcode updates
https://www.reddit.com/r/Amd/comments/97a5p4/windows_10_patch_kb4343909_addresses_high_cpu/
Submitted by softskiller
- Addresses an issue that causes high CPU usage that results in performance degradation on some systems with Family 15h and 16h AMD processors. This issue occurs after installing the June 2018 or July 2018 Windows updates from Microsoft and the AMD microcode updates that address Spectre Variant 2 (CVE-2017-5715 – Branch Target Injection).
-
Other than the initial issue, it has seemed stable, so nothing bad to report at the moment; you may just want to do the same with a fully default load, making sure the components are set right in the BIOS. -
-
Overclocked Dual-Epyc CPUs with Der8auer (Ft. Lots of Shade)
Gamers Nexus
Published on Aug 21, 2018
There is so much shade in this interview that we needed more light. Der8auer talks about overclocking two AMD EPYC CPUs for "the world's fastest dual-socket" PC.
-
Dropbox embraces AMD EPYC single-socket platform to support future growth
https://www.zdnet.com/article/dropb...gle-socket-platform-to-support-future-growth/ -
"Specifically, if you go the Intel path, you are looking for at least Haswell/Broadwell and ideally Skylake CPUs. If you are going with AMD, EPYC has quite impressive performance."
That shows how long the cycle is for bringing in new datacenter hardware.
https://blogs.dropbox.com/tech/2017/09/optimizing-web-servers-for-high-throughput-and-low-latency/
Here's the recent blog post from AMD:
Dropbox Designs its Custom-Built Infrastructure with Single-Socket AMD EPYC Platform
Posted by scott.aylor in AMD Business on Aug 21, 2018 11:01:15 AM
https://community.amd.com/community...tructure-with-single-socket-amd-epyc-platform
"As vast as the datacenter market is, it’s a relatively short list of companies working together in the day-to-day business. I don’t typically have the pleasure of engaging closely with a company that literally has hundreds of millions of customers like Dropbox. With over 500 million users and 300,000 Dropbox Business customers accessing its global collaboration platform, Dropbox is the latest big name in cloud to deploy the AMD EPYC™ processor in their custom-built infrastructure.
“AMD EPYC is a compelling processor option for our compute technology, providing Dropbox with the technical specifications required to support the workloads that matter to teams and our individual users,” said Rami Aljamal, Head of Hardware Engineering and Supply Chain at Dropbox. “We are excited to deploy EPYC processors and look forward to working closely with AMD in the future.”
Dropbox will leverage AMD EPYC™ 7351P one-socket processor platforms to support future growth beyond its current capabilities and refresh its existing infrastructure for its most demanding compute workloads.
The AMD EPYC™ 7000 series delivers compelling options for the Dropbox offering, meeting performance demands throughout evaluation, qualification and deployment. With 16 high-performance cores on the EPYC 7351P processor and leading-edge memory bandwidth, AMD continues to drive a strong balance of compute and connectivity while eliminating the need for a second socket." -
GeForce GTX 1080 Ti vs. Radeon RX Vega 64 OC - 4K Gaming Performance (i7-8700K)
TechEpiphany
Published on Aug 17, 2018
00:01 - Far Cry 5
01:02 - Middle-earth: Shadow of War
01:52 - Total War: Warhammer II
02:55 - Deus Ex: Mankind Divided -
Intel's fear has caused them to announce this:
So they are throwing their board partners under the bus by allowing backward compatibility down to Haswell, out of fear of making customers buy new platforms, which would favor just switching to AMD. It throws out the two-generation cadence and shows they artificially restricted sockets. This is aimed at capturing drop-in upgrade sales rather than full replacement servers. But if the boards are not wired for the new chips' specs, that will limit which features of the new chips can be used (for example, 48 lanes cannot be used on boards wired for only 40).
This just seems really desperate on their part, honestly. -
AMD also has 40% more stream processors (which are good for compute, yes, but don't translate to gaming well at all), lower clock speeds, and lower memory bandwidth limiting overall performance on Vega (one of the main reasons for this was the manufacturing process, which was designed for low clocks and mobile parts - the voltages for any given frequency are bound to be higher compared to NV and reach the threshold for comfortable levels at relatively lower clocks).
Considering what AMD had to work with... the results aren't bad at all, and things should turn in AMD's favor at 7nm (or at least equalize with Nvidia's products once they are also on 7nm - unless Nvidia has managed to make a new architecture, but I doubt that's the case - at best we are looking at a refresh of Pascal with some improvements, as allowed by 12nm for example, and then 7nm).
If you've noticed, Pascal is usually clocked up to 30% higher than Vega on the core alone... thanks in great part to the manufacturing process they use.
Vega is a multipurpose card... not a solely gaming-oriented GPU.
Also, isn't the 1080 Ti an overclocked 1080?
EDIT: I made an obvious error in asking whether the 1080 Ti is an overclocked 1080... there are clear differences between those GPUs at the hardware level which give the 1080 Ti an advantage in games... but those differences also give it an edge over an overclocked Vega 64... so we shouldn't really be surprised by those results - and yet it actually amazes me that people claim AMD clearly 'failed' with Vega, when in fact they don't have to be at the very top in gaming GPUs.
Still, the inferior 14nm LPP process didn't work in their favor... whether things improve at 7nm remains to be seen. -
For the record, I don't hate them... I merely elaborated on the difference in manufacturing processes and some other technical data between Vega and Pascal.
Granted, my statement that the 1080 Ti is an overclocked 1080 was incorrect; the remainder of what I said still stands.
Besides, Vega 64 is an equivalent to the 1080... not the 1080 Ti.
The 1080 Ti also has more ROPs and texture units than the 1080, along with higher VRAM speed and bandwidth, which give it more 'oomph' beyond clock speed alone.
The fact that an overclocked Vega 64 gets within 10% of it is not bad... but also not exactly impressive, since a 1080 with a decent overclock could likely do something similar.
Simply speaking, AMD never made a GPU in the 1080 Ti category... plus, I'm not sure they HAVE to.
Shooting for the highest performance with their resources doesn't exactly strike me as a good idea, especially since it's a niche market to begin with.
AMD's approach relies more on the 'versatility' of the GPU... which at the moment is relatively under-utilized by an industry that primarily optimizes for NV. -
If you don't hate them, your actions are not reflecting your stated thoughts about them. I see no other logical way you could insist on overpaying for less and be satisfied with it, other than that you cannot stand the thought of the alternative companies. -
Now, granted, my saying the 1080 Ti is an overclocked 1080 was an obvious error... but you also said it's 'nice' that a stock 1080 Ti performs favorably against an overclocked Vega, which deserves a similar reaction.
Especially when you consider that the 1080 Ti has more ROPs, higher frequencies, etc. than an overclocked Vega GPU that's the equivalent of a 1080 at stock settings.
It's not unexpected at all that the 1080 Ti would perform better.
Plus, the initial promise of B350 platform upgradeability presented itself as an opportunity... until Asus decided (much later on) to disclose that they won't be releasing BIOS updates for CPU upgrades (nevertheless, that doesn't make the 1700 any less capable in the long run for my needs).
Second: I had no idea that Asus were a 'known-horrible notebook manufacturer', and the responses I got about them prior to my purchase were less than descriptive of them as a 'bad company'... As for Asus 'neutering' the hardware: if you're referring to the mobile RX 580 in the GL702ZC, one could say the same about Nvidia's mobile 1060. Both are 'neutered' to the point where they don't have the same performance as their desktop counterparts and have certain TDP limits so they can be placed into a laptop... even though a 17" chassis is more than capable of housing more powerful hardware (and I expressed these concerns after I got the unit).
Third: the mobile RX 580 in the GL702ZC was already operating at lower frequencies and was limited to a LOWER TDP (meaning it was more efficient) than the mobile 1060, while producing similar or the same performance, depending on the game.
It wasn't AMD's fault that Asus botched the cooling, or that they decided to drop the frequencies as low as they did (or that they didn't bother to optimize the voltages for the GPU at the given frequencies) - and with due respect, I'm hardly clairvoyant, nor am I the first one to make a mistake in purchasing a laptop from a bad OEM - because I actually thought that Asus would make an effort and do things right.
The issues I encountered with the unit showed up well beyond the time frame any reviewers would be able to catch, and subsequent units of the same model produced at a later date seem to operate within advertised specs... it's the initial batch that seems to be having failure issues (and that's the batch I apparently got).
As for the power brick being large... considering the GL702ZC comes with a 65W TDP desktop-grade CPU, no less, and has no iGPU, the thing has specific power demands.
Large power bricks for laptops that house desktop-grade hardware on either side (Intel/NV or AMD) aren't unheard of.
Not exactly comfortable, but that's the trade-off.
If I 'overpaid' for the GL702ZC, I could say that every person who bought a comparable Intel/NV laptop is also overpaying for their purchase.
Do I think Asus charged more money for the GL702ZC than they should have for what they offered? Of course I do... I even wrote as much when I first got the laptop and tested it.
For that matter, I think the current prices of laptops with nothing but APUs in them are ridiculously overpriced... and yet EVERYONE keeps saying it's because of increased market prices.
And, if I recall correctly, I also said I would have asked Asus for a refund, but that option was no longer available to me, so I had to resort to an RMA... but I guess those 'details' don't really matter. -
Oh boy, this is gonna be a long one.
ASUS chose the latter: a 65W RX 580, when the full RX 480 required 192W to play The Witcher 3 at 1440p at STOCK CLOCKS once you disabled power throttling. The RX 580 barely handles 980M performance, far less anything near a 1060N. And if you want to say it can beat a 980M, then overclock the 980M; that's something you don't have the power envelope to do in that ASUS (or the cooling, apparently, considering how hot it gets while being bafflingly loud... it's about as loud as my P870, and almost as large too).
As for ASUS not releasing updates... they always have - without a shadow of a doubt - prevented upgrades. Even when they used MXM, they used a custom PCB design so you couldn't physically insert a later MXM card, even from their own units, into your previous-gen unit... you ALWAYS need to buy a new machine to upgrade with ASUS. Further to that, I also doubt the VRM solution on that board supports an upgrade in the first place, like most B350 boards; 2700X chips burn them out easily. I would forgive you for not realizing this, but the unit isn't sold with a 1700X or 1800X, and those have higher power limits, so... some signs were there.
Now onto the 1060N... the 1060N is, worst case, 15% slower than the desktop card, usually less (without any voltage/frequency tuning, that is; since tuning was heavily talked about on AMD's side, it's fair game here). The OC vBIOS for the 1060, which is 90W TGP, is much closer to desktop performance (probably nearly matching with undervolts). The 65W RX 580 is UNDER 980M performance in quite a few games according to Notebookcheck... ignore the synthetics, look at the games. Fortnite and Star Wars Battlefront are both over 15% slower than on a 980M, which is generally 30% slower than a 980, the 980 being what the 1060 ballparks. That's nowhere CLOSE to desktop-card performance, at all.
So once again I make the statement: your actions don't reflect what you say your thoughts are. When I told you RX 480s were ill-suited to mobile, you denied it. Look at the existing implementations now. The other unit that has it (with still-neutered performance) is an HP Omen with a 120W TDP and an unknown but certainly higher-than-120W TGP/TBP, far above even the high-performance 1070N vBIOSes at 125W TGP, which destroy it in performance, far less after undervolting is applied. Your post above just got through defending them doing this, saying it was ASUS' fault and not the tech's fault. You're still insisting Zen/Zen+ is the best case for a notebook, despite all known implementations lacking high-speed memory support and any form of overclocking, with a 2700X unit still to come, when you could have been using a 4.8GHz 8700K for your production work all this time, with well over 3000MHz memory, in a P870TM1 (or even a P750TM1 if the load was CPU-only and you bought it from the right vendor)... it would not have cost much more with a 1060N in it either, given your storage/memory setup, and it would perform better across the board.
But you still insist on all-AMD. What am I supposed to think? And then you didn't even know the 1080 and the 1080 Ti were two different cards, AFTER Pascal's entire life cycle had ended, meaning you barely even glanced at their product line.
Edit: It was pointed out that the HP Omen's 580 is ~120W board power and 85W TDP, not 120W TDP and higher board power. That still matches it against a 1070N, which cleans its clock, though, but I stand corrected on that. -
It's you who puts too much emphasis on the GPU (and you appear to make quite a lot of errors in the process).
The 7700HQ has about half the performance of the 1700 and wouldn't have been adequate for long-term productivity.
Which laptop at the time had an 8-core/16-thread CPU of that performance available on the market for the same price?
None to my knowledge - and that was in October last year (the GL702ZC was actually reviewed two months BEFORE it became available here in the UK).
So basically, I paid more for having a desktop-grade CPU in a laptop.
Then there are the infamous issues with Intel CPUs in laptops that end up throttling (I noticed those particular problems being reported here frequently enough) due to a poor IHS (which plagues most Intel CPUs in laptops and desktops) and thermal paste poorly applied by plenty of other OEMs.
The GL702ZC's flaws aside, Asus DID manage to apply the thermal paste decently, and compared to the rest of their ROG line (or other laptops) the GL702ZC didn't seem to suffer from throttling issues or overly high temperatures (although I DID mention that Asus could have done FAR better with a 17" chassis).
As for the RX 580's neutered performance... let's get a few things straight: at the time the early reviews were released, they claimed about a 5-10% performance differential, and honestly, the RX 580 handled everything I threw at it well, to the point that I didn't really find any differences worth mentioning.
Plus, the GL702ZC came with FreeSync... and later drivers DID manage to increase the RX 580's overall performance in the unit.
Also consider that the RX 580 was limited to 68W... whereas the mobile 1060 is limited to 80W.
NV had more room to maneuver and was built on a more efficient process which allowed higher clocks... Asus chose to restrict AMD more than necessary (when in reality they could have also limited it to 80W, optimized the voltages and called it a day - at which point performance would likely be similar enough that it wouldn't really be noticed). And as for the AMD title being 36% slower on the RX 580... was that tested in DX11 or DX12? I notice Notebookcheck seldom compares games in DX12 and doesn't really publish which mode they are using - in DX12 the differential between the two GPUs is much smaller, or actually goes in the RX's favor.
I also don't care how much power the full RX 480 or 580 needs... because those are different power demands which also use the highest possible numbers on desktops without any voltage optimizations (and you should know by now that AMD overvolts their GPUs to increase yields, whereas Nvidia locked its voltages and clocks for the most part and didn't suffer the same production issues AMD did).
Furthermore, it's not as if the desktop 1060 was particularly low on power consumption.
When the desktop RX 480 and 580 were undervolted, they both consumed a similar amount of power to the 1060.
They were still a bit higher than NV in power consumption, but that is attributable to the limitations of the manufacturing process AMD had to use.
Also, correct me if I'm wrong, but the 1060 is clocked higher than Polaris on the desktop (the same is apparent with Pascal vs. Vega) while producing similar or the same performance.
Furthermore, there were no other all-AMD laptops on the market at the time.
That was the only unit that presented itself as viable for me and my needs... and when it worked, it worked well.
As for me insisting on an all-AMD laptop at this time... so what?
It doesn't mean I hate Nvidia or Intel.
It actually means I want to support AMD at this time (especially given both NV's and Intel's previous shady market practices - which still doesn't mean I fundamentally hate them), and I consider their hardware viable (and it is, when properly optimized). This is also what I stated before, but somehow you managed to overlook it (although it's entirely possible you will just discard this reasoning to reinforce what you already said - so forgive me if I no longer try to justify my actions to you... and to be honest, I don't have to. I merely chose to write out my reasons now so you can better understand why I chose AMD... and if you won't accept that response, then that's your problem - it also doesn't make your opinion correct).
OEMs making mistakes with AMD hardware in laptops is an entirely separate issue... OEMs are responsible for optimizing their units, which of course they almost never do properly, and they have a history of cutting corners with AMD.
Asus at the time seemed as if they were doing things differently with AMD.
Should this failure issue not get resolved and instead result in a refund, and should no other option present itself, then I will of course consider an Intel/NV laptop.
But don't act as if this issue is entirely AMD's fault, or as if it's inherently not viable for my or anyone else's use.
You've been making the same comments about the desktop versions.
People can get AMD or Nvidia without hating either company.
Our choice to support either doesn't have to mean we hate the other... is it so impossible for you to understand that some of us want to give AMD a chance after they already proved themselves on the desktop?
AMD still has some limitations due to the existing manufacturing process, which limits its efficiency and performance, but that doesn't mean it cannot be optimized to work properly by the OEMs (or users) who choose to use it... and if they do, it's partly their responsibility to make it work properly (it is also AMD's responsibility to optimize their own hardware from the start... but given their limited resources at the time, I'm not sure they could have afforded to do so).
Also, the failures I experienced with the GL702ZC could have happened with other OEMs... in fact, they tend to occur when people adopt early hardware releases, so it's a risk to a degree with any OEM - Asus is hardly an exception here. -
If you don't care, then why are you discussing it? If it doesn't matter, why is it upsetting? If you feel someone is being hateful, then pray tell, re-read; this is the internet, and anyone who gets their heart rate revved up too quickly can mistake anything for a personal attack.
So please, can we get BACK to technical discussions vs. sticks and stones? Thanks <3 -
D2 was the one who pushed the topic in this direction. I openly admitted my mistake in calling the 1080 Ti an overclocked 1080... everything else was an attempt to explain the reasons behind my decisions, as he apparently operates under certain assumptions about me which I tried to correct.
As for your other claims... they are rather bad assumptions, and frankly, I've just stopped caring.
Good night -
yrekabakery Notebook Virtuoso
Voltage and clock manipulation can be done using the curve editor in MSI Afterburner; it is not locked down:
Pot meet kettle, I guess. -
You can claim everything is wrong, but it's not just about you; it's about ensuring you don't mislead other people here as well. The goal is good information with backup and support, not "guys, I believe it's this way and we should all keep this mindset; don't believe anything anyone else says cuz I said it first".
Anyway, thank you for at least backing down vs. spilling more into this and requiring further damage control from the more savvy personnel here. -
RE the discussion, that 1700 stomps all over anything Intel has released in the notebook market, fast RAM or no. -
Book below
So like I said... if you want to make the point about undervolting for performance, you need to apply it to Nvidia too... and oh boy does undervolting ever grant quite a bit of performance there on mobile. So, this point is pretty invalid.
-
How quaint.
If you also check the GL702ZC Star Wars Battlefront II video on YouTube, the player in question managed to stay above 60 FPS most of the time:
You should also know that Notebookcheck claims Mass Effect: Andromeda gets less than 30 FPS at the 1080p High preset on the GL702ZC... whereas in reality the actual gaming performance is far higher; it was above 60 FPS when I played it.
The game never stressed my RX 580 to the maximum, and temperatures were in the 75°C range.
The game that DID manage to stress my GL702ZC's RX 580 to the maximum was Rise of the Tomb Raider... that was fairly evident from the temperature and fan noise, among other things... and the FPS was fairly high, above 60.
So I'd take those benchmarks on Notebookcheck with a pinch of salt, as I rather doubt their 'averages' reflect total gaming performance (especially since the built-in benchmarks of those games aren't usually representative of actual gameplay)... they could also be affected by relatively bad scaling across 8 cores (whereas most of the 1060 laptops the GL702ZC was compared against had 4-core CPUs).
The main consistency that emerged was that the laptop 1060 was about 10% faster than the RX 580 in the GL702ZC.
But so what?
TGP between the two GPUs may not be the same, but I also think people are complaining about it too much.
The 1060 is more efficient, yes... I also pointed out why that is the case.
Besides, D2 keeps saying that GL702ZC users (or rather, myself) 'wasted' their money.
In comparison to what?
Here in the UK, 8700K laptops with a 1060 and otherwise the same specs (SSD, HDD and RAM setup) are MORE expensive than the GL702ZC.
Finally, 8700K laptops weren't exactly around here in Europe when the GL702ZC came to market.
Besides, he keeps 'mentioning' something that cannot be changed in the first place.
I have to go through the RMA with Asus because I have no other choice; getting a refund is not an option at this stage due to Asus protocols.
So pointing out every single thing people think is wrong with the GL702ZC isn't exactly productive.
Plenty of similar shortcomings can be found in other Intel/NV laptops... -
ALLurGroceries Vegan Vermin Super Moderator
Some posts have been deleted by the mod team. One post was un-deleted by the mod team to provide context.
Please keep in mind that English is not everyone's first language. English grammar can be a challenge even for those who have grown up speaking it natively. A tech forum such as this is not the place to be arguing about grammar. We have noted the posters who attempted to derail this conversation with their linguistic critiques and will be issuing infractions if this behavior continues in the future. -
Warning: they swapped the colors; green is AMD and red is Nvidia.
GTX 1060 vs RX 580 - Post RTX Update
UFD Tech
Published on Aug 24, 2018
Which GPU would you choose currently? Which one did you decide on previously? Do you think the landscape between the GTX 1060 6GB & the RX 580 has changed since the RTX 20-series was announced?
-
Has anyone tried running DDR4-3200 with these AMD Ryzen 2500U CPUs? I believe they could benefit greatly from more and faster RAM... I think the chip is bottlenecked by the RAM, because sometimes the graphics performance is phenomenal, and other times it's terrible when the system runs out of RAM and taps into the pagefile.
That's why I'm considering upgrading to 2x 16GB DDR4-3200, plus a 10TB M.2 SSD (teasing) haha lmao -