You tagged on the Vega thing, which is why I put that. I get the price/performance metric, and I get why you'd go AMD now (performance now, 7nm soon). That's where it came from (and this is coming from someone who plans on getting 8 Vega 64s for mining in a couple of days)....
-
-
Last edited: Aug 10, 2017
-
Edit: at the current crypto prices, plus the 70 MH/s rate, that is almost $1K per month at my current energy costs. So, next year, it's all gravy and pays my simple bills. -
Good luck with all that, 12 GPUs is a big investment!!
Let us know how the hashing really goes, it might even exceed 100 MH/s -
Last edited: Aug 10, 2017
-
I'll start with 8. That is $4K, hence waiting on TR; $5K after PSU, motherboard, etc.
I'll definitely let you know. Also, I'll see how the cards work with my 6700K before putting them to work! ;-)
Sent from my SM-G900P using Tapatalk
Edit: this is based on Ether, Dash, and Zcash to recover the ROI, then having enough blockchains for 1 yr+ of service, while trading the currencies like stocks in the market to maximize return. I've been reading tax treatment documents for fun lately... hmscott likes this. -
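For anyone curious how that kind of monthly figure gets estimated, here's a minimal back-of-the-envelope sketch. The payout rate, rig wall draw, and electricity price below are illustrative assumptions, not the poster's actual numbers; only the 70 MH/s figure comes from the post above.

```python
# Rough mining-revenue estimate (all rates/prices are illustrative assumptions).
hashrate_mh = 70.0      # total MH/s quoted in the post
usd_per_mh_day = 0.55   # hypothetical payout per MH/s per day at then-current prices
rig_watts = 1500.0      # hypothetical wall draw for the rig
kwh_price = 0.10        # hypothetical electricity price in $/kWh

gross_month = hashrate_mh * usd_per_mh_day * 30
power_cost_month = rig_watts / 1000 * 24 * 30 * kwh_price
print(f"gross: ${gross_month:.0f}/mo, power: ${power_cost_month:.0f}/mo, "
      f"net: ${gross_month - power_cost_month:.0f}/mo")
```

With those placeholder numbers the net comes out around $1K/month, which is roughly the ballpark being described; swap in your own payout rate and power cost to get a real estimate.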
Glad to see the Ryzen chips doing as well as they are. While they may not be the new king, the price to performance factor should certainly have Intel considering some changes in all aspects of the market. As you can see by my signature I don't update very often. My last upgrade kind of screwed me as far as longevity goes. Intel quickly abandoned the LGA 1366 socket except for their server chips. So for my next upgrade with AMD promising to support AM4 at least until 2020 I know where my money will go. If AMD can bring a competitive GPU to market with Vega and keep the prices down I'll even consider snagging one of those too.
I'm no fan of AMD (at least not since the days of using a pencil to enable overclocking, for those old enough to remember), and I'm no fan of Intel. I'm a fan of getting the best usage for the money spent, and right now it looks like that's AMD. So congrats to them for finally bringing some competition to the market. It's been long overdue. hmscott, Papusan, Mr. Fox and 1 other person like this. -
Support.2@XOTIC PC Company Representative
You could probably make something like this work. -
Sent from my SM-G900P using Tapatalk -
Support.2@XOTIC PC Company Representative
-
-
-
With Intel, you get about 15%-25% over the all-core boost. All that means is they left performance on the table to make you feel better. There are multiple factors that determine final performance. I couldn't give a **** about clock speed if the numbers are there. Faster isn't always better, or I'd ask where your FX chip from AMD is at. Also, did you see the power draw from the 7900X? You can pull 400W on that 10-core if you completely open it up, whereas on the TR the system wattage was 400W and the chip wattage was 285W. Do you even know the numbers? Try reading and understanding before speaking. At 285W when overclocked, it outperforms Intel's 10-core pulling [email protected]. So, let's back up and try that again, shall we? Also, learn to read the numbers over the slant, like the bias at PCPer, shall we...
hmscott likes this. -
This is the max OC these reviews reported for the 1950X w/o going to extreme voltage and/or exotic cooling, or having a golden sample:
TweakTown - 4 GHz
TechSpot - 4 GHz (358W system draw, 101W increase over stock, 30W < 7900X @ 4.5 GHz)
Guru3D - 4 GHz (382W system draw, 121W increase over stock)
Ars Technica - 3.9 GHz (552W system draw, 101W increase over stock, 45W > 7900X @ 4.6 GHz)
PC Gamer - 3.9 GHz (intermittent crashing, >400W system draw, >50% increase over stock, >90C temps, 3.8 GHz @ 1.25V fully stable)
KitGuru - 3.95 GHz (522W system draw, 190W increase over stock, 123W > 7900X @ 4.6 GHz)
Gamers Nexus - 4 GHz (283W CPU-only draw, 132W increase over stock, 55W > 7900X @ 4.5 GHz)
I think I see a trend... ole!!! likes this. -
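As a quick way to read those reviews side by side, here's a small sketch that tabulates the clocks and reported power increases from the list above; the figures are copied from that list (PC Gamer and TweakTown didn't report a clean delta), and the averaging is only illustrative.

```python
# Reported 1950X OC results from the reviews quoted above.
# (max clock in GHz, reported increase over stock in W; None = no clean delta given)
results = {
    "TweakTown":    (4.0,  None),
    "TechSpot":     (4.0,  101),
    "Guru3D":       (4.0,  121),
    "Ars Technica": (3.9,  101),
    "PC Gamer":     (3.9,  None),
    "KitGuru":      (3.95, 190),
    "Gamers Nexus": (4.0,  132),   # CPU-only measurement, not full system draw
}

clocks = [c for c, _ in results.values()]
deltas = [d for _, d in results.values() if d is not None]
print(f"typical max OC: {sum(clocks) / len(clocks):.2f} GHz, "
      f"average reported increase: {sum(deltas) / len(deltas):.0f} W")
```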
As BIOS updates come out (a lower-voltage one is on the way, I've heard), plus faster memory support and better cooling plates - the current adapters don't have good coverage (see the Gamers Nexus video) - things should pick up and solidify 4.0 GHz, with 4.1 GHz/4.2 GHz possible with more voltage and/or better cooling.
There's no need for a rocket launch with every PC; stable, solid performance outperforming X299 / i9 at power draws 100W+ lower and at half the price is enough. Last edited: Aug 11, 2017. AZHIGHWAYZ likes this. -
The TR build, on the other hand, is going to be my playground. Extracting the highest level of performance out of this platform is going to be tough, but it will be the most rewarding, at least in my eyes. Everything is going to be a contributing factor in maximizing TR's performance. I haven't been this excited about the potential of a new platform in years.
After the smoke clears, the i9 will still be the "overall" performance king, but that toothpaste on the die kills it for me. -
Very sad results from all of them. Also, if it is not primarily user ignorance, 50% is not a good ratio of winners versus losers in the lottery.
It might actually be a blessing that Intel has not soldered the IHS. One might get better temps after a delid with Conductonaut or CLU on the die than one would with a factory-soldered IHS. Don't let your fear about a voided warranty shackle you and keep you from doing what is best for you and your beast. That is a terrible ball and chain to have to drag around, and it will stifle your pleasure if you allow it to control you. Last edited: Aug 11, 2017. temp00876, ajc9988, hmscott and 1 other person like this. -
Also, I agree he picked the worst reviews, because everything I saw said the opposite. Many of those are WAY under frequency with WAY higher temps than what I saw from the reviewers I trust.
PCPer gave the exact same 400W BS line and presented it the same way he did - and he didn't collect the data until I said it, and he used very specific reviewers. The "professional" reviewers, except for a handful, don't actually know how to do what we do.
If you want a good, trustworthy review, check out this one from OC3D, which uses true OC techniques, and look at the 4.2 GHz OC scores. He didn't get a binned sample; the other reviewers don't know what they are doing, didn't know the reported junction temp is offset 27C higher than the die temp, etc.
https://www.overclock3d.net/reviews...extreme_and_ryzen_1950x_threadripper_review/1
Lots of bad info in reviews out there. Also, I've seen the spread of scores: the less I trust a site, the lower its scores tend to be on certain benches, I've noticed.
Sent from my SM-G900P using Tapatalk
Edit:
So it looks like 326.8W of CPU core power according to HWiNFO, not 400W - and you should see his work and Der8auer's on the 7900X power draw. Last edited: Aug 11, 2017. hmscott likes this. -
Guru3D - Max temp on die for OC - 55C
Tweaktown - Max temp on die for OC - 85.5C
TechSpot - Max temp on die for OC - 56.5C
Ars Technica - couldn't get 4.1 stable so went to 3.9 and set it to 1.35V max just because AMD said 1.35V? Says 78C temp, but ZERO pictures to confirm; it was likely at 51C on the die due to the offset, with numbers that seem WAY high compared to the others
PC Gamer - only mentions going above 90C (likely the offset again); used 1.375V for 3.9 (didn't know about Vdroop and was likely doing many things wrong)
KitGuru - used Level 1 LLC (completely WRONG setting, which should be something like Level 3 LLC, IIRC), 1.3875V (see the first part); claims 85C, but if you account for the offset, the temp was likely 58C
GamersNexus - used direct line reading - 283W under load and temps in the 60s to 70s (estimating because they gave delta over ambient)
You conflate the comparison data stated for the 7900X on purpose, without giving a direct watt-to-watt comparison, @Carrot Top. I wonder why? Last edited: Aug 11, 2017 -
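For reference, the correction being applied in that list is just subtracting Threadripper's reported tCTL offset from the sensor reading; a minimal sketch, assuming the 27C offset cited above:

```python
# Convert a reported tCTL reading to an approximate die temperature,
# assuming the 27C Threadripper offset cited in the posts above.
TCTL_OFFSET_C = 27.0

def tdie_from_tctl(tctl_c: float) -> float:
    """Approximate die temperature from a tCTL sensor reading."""
    return tctl_c - TCTL_OFFSET_C

print(tdie_from_tctl(85.0))  # a claimed 85C reading would be ~58C on the die
```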
This (and budget constraints) is why I will watch and wait until all of the talking heads go away and HWBOT is filled with examples of the best Team Red and Team Blue have to offer. I will compare what guys like us do with ThreadRipper to what guys like us do with i9 Extreme and decide how to spend my money with eyes wide open. I stopped being an early adopter a while ago. Even when I do adopt early, it is with great fear and trepidation. That is a dangerous way to live in terms of finances. But it's not that bad waiting, either, because it is fun to watch the saga unfold. And it makes me very happy to see some hardcore rivalry and contemptuous dialogue for a change. It seemed like those days were over. It's so nice to see them coming back. -
Support.2@XOTIC PC Company Representative
Yeah, it's the extreme side of customization potential while still being somewhat self-contained. Like the opposite of a Zenbook. There really is something for everyone.
And I'm in the same boat. I went desktop/tablet and I'm not really on the road enough to justify a laptop right now. ajc9988 likes this. -
-
One getting 4.2 while the others get 3.9 and 4.0 on average - at first glance you can tell right away it's cherry-picked. -
I noticed an article on TR vs Intel's i9 and they say that Intel has superiority in IPC and clock speeds.
https://arstechnica.com/gadgets/2017/08/amd-threadripper-review-1950x-1920x/
Articles like those keep reinforcing the idea AMD lags behind Intel in overall IPC performance.
This is way too exaggerated if you compare clock for clock... Intel manages at best an 18% increase in single-threaded performance over Threadripper, for example, because it is able to boost a single core to 4.5 GHz (and lo and behold, that clock rate is much higher than AMD's).
Where are the graphs comparing clock-for-clock ratios between AMD and Intel?
I'd like to see equalized single-threaded performance on, say, both the Threadripper 12c/24t and the i9-7900X with a single core clocked at 3.8 GHz, and then see how much difference there is in IPC.
If Intel's 18-core CPUs maintain a much lower base clock rate than TR (less than 3 GHz), they will likely LAG behind AMD in multithreaded programs.
Not to mention the fact that most programs out there still aren't optimized for Ryzen and TR. ajc9988 likes this. -
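To make that clock-for-clock request concrete, here's a minimal sketch of how such a comparison could be normalized. The benchmark scores are hypothetical placeholders; only the clocks come from the discussion above.

```python
# Normalize single-threaded scores by clock to get a rough per-clock (IPC proxy) comparison.
# Scores are hypothetical placeholders, not measured results.
intel = {"score": 180.0, "clock_ghz": 4.5}   # e.g. i9-7900X boosting one core to 4.5 GHz
amd   = {"score": 155.0, "clock_ghz": 4.0}   # e.g. Threadripper running at 4.0 GHz

def per_clock(cpu: dict) -> float:
    """Score divided by clock: a crude proxy for per-clock throughput."""
    return cpu["score"] / cpu["clock_ghz"]

ratio = per_clock(intel) / per_clock(amd)
print(f"per-clock (IPC proxy) advantage for Intel: {ratio - 1:+.1%}")
```

With those placeholder numbers, a raw 16% score gap shrinks to about a 3% per-clock difference, which is the kind of breakdown the reviews rarely show.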
Ain't that stupid.
-
-
Intel would only be better with more cores. And not everyone buys processors with the maximum number of cores. Intel's best weapon is OC'ing. Nothing more than that.
-
Then, after that is done, I'll go to the 7900X to try to find the same, noting any differences in hardware used (RAM speeds and timings, graphics cards used, etc.). Give me a little time, as I found more data points, including an article with 4.1 GHz; then I have to go through the videos of reviewers who don't have articles or blogs but are widely shared to varying degrees.
Sent from my SM-G900P using Tapatalk
Edit: so far, 4.0-4.1 seems common, and 4.15-4.2 seems toward the cherry end. HotHardware, I believe, got 4.175 with their AW review, but that was a chip sent by AMD, so it may be cherry. Papusan likes this. -
IMO each individual should figure out their needs and has the responsibility of learning, to an extent, the hardware they purchase, and yes, sometimes it is upsetting to see your favoured hardware not getting the praise it should have received. IMHO most people don't care enough, which is why the overclocking and enthusiast side of things is dying. -
-
-
ajc9988 likes this.
-
But we saw a similar issue with Intel needing better cooling too, or being limited to 4.5 or 4.4 in some reviews. So you have a point there. But one of them yesterday said they couldn't do 4.1 stable, so they did 3.9. Also, AMD has 0.025 GHz increments, so stepping back 0.2 GHz automatically invalidates the inclusion of that result. The other two were worried about voltage, but one admitted to using the LLC setting that allows for the largest Vdroop, meaning under load his voltage was way less. If he didn't compensate, this means he wasn't volting properly or didn't know his limit, which invalidates that 3.9 score as not knowing what to do.
But you are right, heat seems to hit both AMD and Intel CPUs hard for HEDT, except for the redesigned VRM cooling used on X399, which seems to have fixed what was seen on the X299 platform.
Sent from my SM-G900P using Tapatalk -
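To illustrate the Vdroop point: with a weak LLC level, the voltage actually delivered under load sags well below the BIOS-set value, so a "1.3875V" setting may be running far less than that at the cores. The current and loadline-resistance figures below are assumptions for illustration, not measured values.

```python
# Rough Vdroop estimate under a heavy all-core load (illustrative assumptions only).
v_set = 1.3875        # BIOS-set Vcore in volts (the KitGuru setting quoted above)
i_load = 200.0        # assumed package current under load, in amps
r_loadline = 0.0005   # assumed effective loadline resistance at a weak LLC level, in ohms

v_droop = i_load * r_loadline
v_load = v_set - v_droop
print(f"~{v_load:.3f} V delivered under load ({v_droop * 1000:.0f} mV of droop)")
```

With those assumed numbers the chip would see roughly 1.29V under load, which is why a reviewer who doesn't compensate for droop can end up undervolted and unstable at a clock the silicon could otherwise hold.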
Intel gets an additional boost from being able to turbo a SINGLE CORE to 4.5 GHz.
This gives people a very skewed picture of AMD's performance - and sensationalizes something beyond measure.
I would like to see a more tempered representation of each platform's strengths and weaknesses. -
-
But it is worth noting that optimizations for Ryzen gave a great performance uplift to AMD's old CMT architectures, suggesting that there is a bias in programming. But I don't want to rehash that conversation, as we've had it many times. The only question is adoption of the hardware and whether software engineers then optimize for it. But even with this, we have seen awesome scaling from SMT compared to HT. -
-
Let's wait for release and actual comparable comparisons; none of what you've given is comparable. -
If it were me, I'd buy a 5 GHz CPU but only run it at 4.8 GHz, because I don't want the voltage to go over 1.3V or 1.25V; depending on temps I'd lower it even more, just so I can have a high enough frequency with excellent temps. In a way it's stupid and a waste of money, but that's just my preference.
If people think 5 GHz Intel vs 4 GHz AMD means it's like 30% faster in IPC, they could be wrong in one way or another. -
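A quick bit of arithmetic shows why that reading is off: most of such a gap would come from clock, not IPC. The 30% overall figure below is just the hypothetical claim from the post above, not a measurement.

```python
# Sanity check on the "5 GHz vs 4 GHz = 30% faster" claim (illustrative numbers only).
intel_clock = 5.0    # GHz
amd_clock = 4.0      # GHz
observed_gap = 1.30  # hypothetical: Intel 30% faster overall in a single-threaded test

clock_ratio = intel_clock / amd_clock    # 1.25 -> 25% of the gap comes from clock alone
ipc_ratio = observed_gap / clock_ratio   # ~1.04 -> only ~4% would be actual IPC
print(f"clock advantage: {clock_ratio - 1:.0%}, implied IPC advantage: {ipc_ratio - 1:.1%}")
```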
-
Sent from my SM-G900P using Tapatalk -
That's why I said, wait for real tests with comparable results, not cherry-picked smatterings of unrelated runs.
I don't want this kind of time-wasting stuff, I want real comparisons.
Be patient; the ThreadRipper scores and comparisons will come out, as will RX Vega's, and about 2-3 months from now the release and real production benchmarks for Coffee Lake will arrive, and we can compare them against Ryzen 2 and ThreadRipper. Papusan likes this. -
The 6700K 3DMark 11 physics score is what I get. Of course it's OC'd, but it has half the R1700's cores, and the R1700 is already used in laptops. Remember the Asusbook?
And with locked firmware (no OC possible). I'm talking about a processor that can and will be used in laptops.
Last edited: Aug 12, 2017hmscott likes this. -
On IPC they are dead wrong... on clock speed, closer to accurate.
Plus, most of the software out there is optimized for Intel hardware (which reviewers seldom mention)... not AMD. The majority of testing is done on programs that AMD needs to brute-force its way through (or that don't use it properly - Threadripper was seldom being stressed to capacity in most of the tested software, for example in programs that weren't really optimized for Ryzen, let alone TR).
I just detest these kinds of incomplete reviews. They're cheap, focus mostly on gaming and unoptimized software, and don't take other things into account to give readers proper information.
There was barely one highly comprehensive article on how Intel skewed things against AMD's APUs, for instance, in terms of OEMs, and damaged their influence in the market for over 10 years. That was just about the only thing I saw that came close to an accurate description of what happened in a whole segment of the market. -
-
-
-
Perhaps on 14nm+ (which seems less likely as this process will probably improve clock rates and efficiency to a certain degree)... more likely we could get TR at 7nm (in Ryzen 2 form) and greatly reduced TDP with slightly higher clock speeds, and improved IPC.
Of course, by Ryzen 2, we will likely see either Vega or Navi being used as an IGP in AMD APU's.
Navi would be of great benefit, as it could, for example, connect 2 or more mobile Polaris RX 580s via Infinity Fabric. Though there are rumors that Navi will simply be Vega with IF (which wouldn't be bad at all depending on its performance - it DOES have a huge amount of compute hardware which needs to be geared towards gaming by developers to be fully used).
If 14nm+ alone results in an increase in efficiency and clock speeds, then by the time 7nm rolls out, I wouldn't be surprised if AMD decided to make an entry-level dedicated GPU with 4 Infinity Fabric-connected Polaris GPUs that have been enhanced on an architectural level.
Of course, this depends on whether 14nm+ will affect AMD GPUs - but they also need to move away from the manufacturing process that prohibits high clock speeds if they want to boost the GPU beyond architecture while maintaining or improving efficiency. Last edited: Aug 13, 2017 -
-
hmscott, Rage Set, Papusan and 1 other person like this.