Unfortunately, I couldn't find a clean spec sheet on the Gigabyte or Asus boards. We have info from back at Computex and a couple of new screenshots, but not much more. With pre-sale starting on the 27th, pricing will be known soon.
@hmscott , @TANWare , @Papusan - Did you notice the supported speeds on those ASRock boards? 3600 MT/s, which most likely means they have a QVL at that speed. They didn't say whether that was a single DIMM or full quad channel, though, but I wanted to point it out.
-
MSI gets quite a few RAM overclocking top scores, but so does ASRock. Asus is usually among the top two or three boards for overclocking, and Gigabyte is a strong performer. But until I see some reviews, I cannot jump, as usually one board is stronger in one area while another beats it in a second. You have to weigh what you want: the strongest memory performance, VRM and thermal performance, the strongest OC, BIOS/UEFI ease of use and features, and, if water cooling, whether there will be a compatible block (especially if buying a full-coverage block for the board, and when it will be available), etc. -
From a quick look over them, the MSI Gaming looked like the nicest one.
-
MSI looks good, but I'm waiting to see performance. On Ryzen 7, I don't think they had BCLK overclocking.
-
tilleroftheearth Wisdom listens quietly...
See:
http://www.tomshardware.com/reviews/intel-core-i7-7820x-skylake-x,5127-9.html
While the i7-7820X Skylake-X platform may not offer the best overall performance (I'm ignoring gaming 'scores' as they're irrelevant to me), what I take from the above article is that even with the optimizations Ryzen has seen in the last few months, Intel is still the overall productivity choice. The specific Intel processor you need varies with your workloads, of course.
AMD is close; but productivity doesn't reward 'close' - it rewards 'best', absolute, period.
Looking past all that, what is left is that AMD is only competitive, overall, in very specific scenarios (price being the biggest one, depending on the actual workload/workflow, plus high core/thread performance).
What is eye-opening for me is that the idle power specs of AMD's platform make it unsuitable for my usage style (I leave most of my systems on 24/7/365).
And just in case you think that undervolting can make a difference:
Thanks again to AMD (and their fans) for pushing Intel a little harder than what we had become accustomed to. -
This does not mean to carry on in another thread. Things and plans change to meet the market, enough said.
-
FYI... recently made recaps of Intel's long, sordid history of "fair play" with AMD...
Intel - Anti-Competitive, Anti-Consumer, Anti-Technology.
Are we MAD at Intel?!
http://forum.notebookreview.com/threads/intel’s-core-x-i9-and-i7-series-x299-xeon-1p-2p.804776/page-13#post-10572105 -
AMD R3 1300X Review vs. 7350K & More | Intel's Response
-
To date, AMD has said they will not unlock an EPYC 1P CPU, but there are plans to unlock future feature sets. Now, imagine they unlocked one 1P chip with 16/24/32 cores and SuperO made a gaming board for it, where you could use riser cables for the PCIe, etc. 128 lanes. Especially since Nvidia is following suit with an MCM design, which means they might remove the artificial two-card cap on Nvidia SLI. Dreams...
-
-
That aside, the silicon lottery on Skylake-X chips is looking good; 4.8-5 GHz on 6-10 cores is definitely doable. Too bad they don't come in a laptop. It's quite sad Clevo didn't agree with what we and Eurocom wanted; truly pathetic. -
-
don_svetlio In the Pipe, Five by Five.
-
It seems unlikely for a first-gen 7nm part to hit 5 GHz right off the bat; even the best Sandy Bridge chips came on the 2nd revision of 32nm, and Skylake-X sits on a mature 14nm too. Well, people can still be wishful and hope for the best.
Also, Threadripper had better come soldered.
Update: looks like it is soldered. Good stuff. -
Typical OCer's tune. I used to be the same until I figured out that it is fast enough to get my stuff done. Like I said, if it finishes in the blink of an eye, who cares exactly when it is done while my eye is still closed, so long as it is done before it opens.
-
-
-
Support.2@XOTIC PC Company Representative
-
Please stop talking about Clevo manufacturing failures; it has nothing to do with the topic of this thread. Thanks.
-
AMD EPYC vs Intel Xeon Scalable 2S Architecture Ultimate Deep Dive
-
I am not sure if this was posted already but get out your salt shakers. http://techreport.com/news/32310/rumor-mysterious-slide-lists-full-intel-core-i9-specifications
If true, the war has only begun... -
I can believe the Turbo Boost 3.0 number, as it is limited to 2 cores, but the Turbo Boost 2.0 figure seems a bit much if it applies to all cores. I think the turbo/OC without a delid on all cores will be 1-1.2 GHz over base. Still, this will be admirable for those CPUs against Threadripper.
-
-
For now, TB 3.0 will work great for a lot of games and single/dual-threaded apps. As apps start to use all those cores, TB 3.0's usefulness will diminish, but that is for much later on.
-
-
"The technology is key for 'an era of Moore’s law-plus where we’re getting new density advantages at each node and cost advantages as each new node matures, but mask costs are going up and chip frequencies are not going up, so how we put solutions together is critical to sustain the pace of development,' he said." http://www.guru3d.com/news-story/amd-cto-talks-about-moving-to-7nm.html
"In software, 'my call to action for the EDA community…is to redouble their efforts to take advantage of more CPU cores and parallelism…As the processing required for 7nm escalates…their algorithm optimization needs to take advantage of the very technology they are helping us manufacture,' he said, noting AMD’s new Epyc processors sports 32 dual-threaded cores." Id.
Just a little to chew on considering my last comment on 4.7 GHz vs 4.0 GHz being a push. It should be noted most benchmarks still favor single-threaded IPC, and some are not well suited for testing MT tasks (while others are). Because of this, Intel may hold the crown, but it is a case of software testing for specific tasks that the industry is currently evolving away from. We've been through the arguments on when to dive in, including a recommendation for many to wait for the real battle, which is Zen 2 vs Ice Lake (7nm vs 10nm+). So this is why I'm slightly less concerned. If the HCC chips from Intel wind up too close to 4 GHz (assuming that TR cannot go past 4 GHz average), we could see a situation where there isn't enough single-thread performance and not enough MT scaling to recommend Intel, especially at a 70% cost premium. -
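To make the "push" concrete, here is a rough back-of-the-envelope score along the lines of the argument above. This is only a sketch: the core counts, clocks, IPC ratio, and scaling efficiency are illustrative assumptions, not measured figures from either chip.

```python
# Crude multi-threaded throughput estimate: cores * clock * relative IPC * MT
# scaling efficiency. Every number below is an illustrative assumption, not a
# benchmark result.

def mt_throughput(cores, clock_ghz, rel_ipc, scaling=0.9):
    """Toy multi-threaded throughput score in arbitrary units."""
    return cores * clock_ghz * rel_ipc * scaling

# Hypothetical 16-core Threadripper at 4.0 GHz, IPC normalized to 1.0
tr = mt_throughput(cores=16, clock_ghz=4.0, rel_ipc=1.00)

# Hypothetical 16-core Intel HCC part at 4.2 GHz with an assumed ~5% IPC edge
hcc = mt_throughput(cores=16, clock_ghz=4.2, rel_ipc=1.05)

print(f"TR score: {tr:.1f}, HCC score: {hcc:.1f}, delta: {hcc / tr - 1:.1%}")
```

Under these assumptions the gap lands around 10%, which is the kind of margin where a 70% price premium becomes hard to justify.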
-
Edit: also, clock is king because programming has not kept up with the hardware, and because of Intel's artificial quad-core limit on the mainstream, just as was seen in the dual-core to quad-core transition. -
-
I find it amazing. Everyone agrees Intel has participated in antitrust behavior for years. They kept AMD out of OEM systems even when AMD had the better CPU; they had benchmark code written to favor their own CPUs, even when slower, to look good; they have had code optimized specifically for their CPUs; and they tried to start proprietary standards to lock out the competition (IA-64).
So we still fall into the trap of not advancing Moore's law. Even Intel has realized the future lies in multi-core and multi-thread. It no longer impresses me to soup up the CPU that drives yesterday's software. I am more impressed by the CPUs of the future that carry Moore's law forward, no matter who makes them.
In the end we need to fall out of that OC mentality, as it is too one-dimensional. Yes, awards are nice; no one will dispute that. However, the reward of having sponsored better general computing, to me, far outweighs that. -
-
-
As to the belly button comment, what do you think every unlocked CPU is? Everyone with that chip gets about the same, with the silicon lottery playing a role. Now, a better OCer can squeeze a lot out of a lemon, but it will still be a lemon. You get the same thing on Ryzen: they don't all clock the same. The best hit 4.2 GHz, and almost all hit 3.9, which means you have the same exact silicon-lottery range you get on Intel, just clocked lower. Now, if the IPC advantage AND the speed advantage at certain tasks only give you a 50/50 on which CPU wins, then it comes down to programming and optimizations.
This brings me to @TANWare 's point about Intel handing out proprietary compilers that purposely used a WAY slower instruction path if an AMD CPU was detected instead of an Intel one. We see this in a lot of programs today, where the FX series is now able to beat Sandy Bridge handily, all due to coding for better multithreading because of Ryzen. That means the FX was ALWAYS a better chip, and that programmers were writing single-threaded programs or optimizing only for Intel on purpose, thereby holding us at quad cores just like we saw in the dual-to-quad transition.
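The kind of vendor-based dispatch described above can be sketched in a few lines. This is a simplified toy illustration, not any actual compiler's logic; all function names here are invented for the example, and the vendor string is hardcoded for the demo.

```python
# Toy illustration of vendor-string dispatch: a runtime picks the fast code
# path only when the CPU reports "GenuineIntel", falling back to a slower
# generic path otherwise -- even if the other CPU supports the same SIMD
# instructions. Names and structure are invented for this sketch.

def read_vendor_string():
    """Stand-in for the CPUID vendor query (hardcoded for this demo)."""
    return "AuthenticAMD"

def sum_fast(values):
    return sum(values)          # stands in for a vectorized (SSE/AVX) path

def sum_generic(values):
    total = 0.0
    for v in values:            # stands in for the unvectorized fallback
        total += v
    return total

def dispatch_sum(values):
    # The contested behavior: keying the choice on the vendor string
    # instead of on the feature flags the CPU actually advertises.
    if read_vendor_string() == "GenuineIntel":
        return sum_fast(values)
    return sum_generic(values)

print(dispatch_sum([1.0, 2.0, 3.0]))  # both paths give 6.0; only speed differs
```

The point of the sketch: both paths produce identical results, so benchmarks run on such binaries measure the dispatcher's choice as much as the silicon.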
And that brings me to your original statement that the hexacore chips should have annihilated the quad-core chip. They very well would have, if not for code that is poor at handling MT workloads. It's why we have seen the wall on games at six cores for a while, with little scaling above that. On the speed side, that is the laws of thermodynamics: more cores = more heat (although lasers have now been shown to remove heat, which may be useful if a process can be worked out for that as we transition to light-based information transmission over classical mediums).
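The six-core wall described above is essentially Amdahl's law in action. As a sketch, assuming a parallel fraction of 0.85 (an illustrative figure for a well-threaded game, not a measured one):

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the fraction
# of the work that can run in parallel and n is the core count. With an
# assumed p = 0.85, the gain from adding cores flattens out quickly.

def amdahl_speedup(p, n):
    """Ideal speedup on n cores for a workload with parallel fraction p."""
    return 1.0 / ((1.0 - p) + p / n)

for cores in (2, 4, 6, 8, 16):
    print(f"{cores:2d} cores -> {amdahl_speedup(0.85, cores):.2f}x")
```

With these numbers, six cores already deliver about 3.4x, while going all the way to sixteen only reaches about 4.9x, and the ceiling (1 / 0.15 ≈ 6.7x) can never be exceeded no matter how many cores you add. Better-threaded code raises p, which is exactly the software change argued for above.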
Now, AMD designed the new chips by scaling back the number of R&D engineers but focusing on talent. They had to find the best way to get back into the game without blowing the budget and going belly-up in bankruptcy the next year. I feel they have already accomplished this. Now, done fighting for survival, you see them working on a better process than Intel's, one which may beat the logic density of Cannon Lake (Ice Lake may be denser because of refinements in the process, so I leave that open; this is under the traditional density calculation). Intel has been trying to go elsewhere for extra revenue due to lagging computer sales. Because of this, they have bled money all over the IoT market, all while taking their eye off the ball.
We'll see where it goes... -
My firm belief is that when AMD first made TR and EPYC, they were using stepping 1 of the CCXs. I think there were major issues and it was not even as effective as the ring fabric Intel was using. In fact, it was so bad that Intel never even planned more than a 12-core part to combat it. The EPYCs were probably even worse, so the Xeons faced no threat at all, but eventually they would be getting updates as well.
Stepping 2, though, came in, and EPYC started to pose a real threat, and subsequently TR as well. This caused Intel to redesign their fabric for better scaling above 12 cores, along with the cache changes. The rest is current history unfolding. -
What I am not OK with is normalization, where you pay the same and get the same as the guy next door, and that's all there is to it, because it won't do more if you try to force the issue. Or it does so little more that it's not worth the bother.
Remember 7970M versus 680M. AMD fanboys were happy to have an NVIDIA GPU killer. And they did, for a few minutes. Then, once the overclocking and vBIOS mods started, the 680M tore the 7970M a new heiny hole and that was the end of it, because the 7970M sucked at overclocking. Then they started dropping like flies as people tried to catch up with the 680M, because they weren't built to handle it. -
Not only that, AMD kicked up R&D spending this last quarter and is working desperately to succeed at 7nm, where the true war against Intel will be fought. They needed cash flow to do it, and Ryzen is that cash flow (as is Vega...). Intel doesn't want its bottom line to hurt, but also doesn't want to kill AMD, because if they do, they can be broken up! Also, on SMT, Intel's HT doesn't come close!
Intel cannot really respond until Ice Lake, and considering IBM's process is used and the IPC on their products, that may just shift it all. But it still comes down to software developers getting on board! Look at Mafia III, the only game with "optimizations" for Ryzen, where the Ryzen scores WENT DOWN while Intel's went up. Or look at the poo-stain Gears of War 4, which also shows Intel with a huge lead (which is also borne out in the Office productivity suite, where a quad core is still king, which is a joke)! So until those issues are addressed and software developers get off their butts, you'll see this crap of them saying Intel is better when it may not be. -
Well, yes and no. If we measure the results and everything is tilted in favor of Intel on the software side, and Intel continues to beat AMD, guess what... it's better, based on results. Good intentions are nice and admirable, but the only thing that really matters at the end of the day is results. I don't agree that a CPU is actually better unless it produces better results; the reasons why don't matter if the results don't show it is the better product. It's good to teach kids that how you play the game matters. Integrity is important. But if you play the game right, you might lose, and losing always sucks. Doing the right thing will always be important, and we need to sleep at night, but losing always sucks. We have to accept that sometimes things are just not going to end the way we want them to, and then go find something else to do.
-
The way you say it here, it is like there is no difference between the guy who overclocks and the one who doesn't, when every single MHz actually does help here. -
As to the research advantage: at 4%-10% improvement per year, they are out of runway. They hit the shrink wall and everyone is lapping them. All they have left is the server market, which accounts for a huge amount of their income (the server market is $16B; enterprise client PCs come next, and that segment is being challenged very well by the Ryzen Pro series, especially since an Intel i3 cannot do security on vPro, whereas AMD can do the equivalent with DASH). So the fight hasn't even begun for the large market share. HEDT gives notoriety. Records give notoriety and status. But HEDT is only a high-margin, small market.
Ryzen vs i7 (Mainstream); Threadripper vs i9 (HEDT); X299 vs X399/TRX40; Xeon vs Epyc
Discussion in 'Hardware Components and Aftermarket Upgrades' started by ajc9988, Jun 7, 2017.