It has, but there was no competition worth batting half an eyelash at. Now there is. Personally I still think if you want the top in performance you go Intel, but as I was explaining to Luna a couple of weeks ago, even with current-gen tech, considering average OCing capability, you can get an $800 CPU/mobo/RAM/cooler combo with Ryzen that'll be within about 10% of a $2000 CPU/mobo/RAM/cooler combo with Broadwell-E. Skylake-E being a good OCer I expected, and with the expected price drop for the equivalent core counts, I'd have pushed for it well over mainstream Ryzen. But Threadripper, especially with 64 PCIe CPU lanes for cheap versus needing a $1000+ CPU for 44 PCIe lanes on Skylake-E? Intel is now attempting to lose sales.
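The price/performance claim above works out to simple arithmetic. A minimal sketch, where the combo prices and the ~10% gap are the figures quoted in the post, not measured benchmarks:

```python
# Perf-per-dollar sketch using the figures quoted above. The ~10%
# performance gap is the post's estimate, not measured benchmark data.
ryzen_cost, intel_cost = 800, 2000   # full CPU/mobo/RAM/cooler combos, USD
ryzen_perf, intel_perf = 0.90, 1.00  # Ryzen assumed within ~10% of Broadwell-E

ryzen_value = ryzen_perf / ryzen_cost   # performance per dollar spent
intel_value = intel_perf / intel_cost

print(f"Ryzen perf-per-dollar advantage: {ryzen_value / intel_value:.2f}x")
```

So even conceding the full 10% performance deficit, the Ryzen combo comes out 2.25x better on perf-per-dollar under those assumed prices.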
-
-
Keyword for that is Rendering / Simulations / Encoding.
-
Hummmm. No matter how I change it around, it still reads a max of 1.375V. Even right now it runs at 1.5V, but the actual voltage is still a max of 1.375V, because if it really were 1.5V I would be able to bench everything at 5.3 and 5.4GHz.
From what I was told at the hwbot event in Vegas, the earlier chips were pretty decent. They set quite a few world records with those chips. -
Very true, but the number of people who are so professional that it'd make a large difference to that point is rather small. Think about it: what's stopping someone from buying a $1500 Xeon for X99 instead? It'll need less cooling and won't OC well, but the core count is likely to be far higher than 8 for $1000. It'd shred Ryzen.
But now with Threadripper vs Intel, well... you'd need to specifically be using the top end to realistically beat it (especially if a user isn't OCing), so a $2000 chip, 18 cores. But again, a lower-clocked, more-core Xeon might be the same price or cheaper. As much as I dislike the direction Linus goes, when the 10-core Broadwell-E chip launched for $1750, they made the distinction that a dual-CPU mobo and two lower-clocked 8-core Xeon chips was only about $100 more, but the actual performance difference was huge.
I don't think Skylake-E is bad, but Intel is basically LOOKING for reasons for people to not want to buy their stuff, in general.
Benching though, that's where I see the 18-core being best, since it's going to officially be easier to overclock. -
Actually, that is not the case. Check out the base/boost clocks on those cheaper multi-core Xeons.
-
It means that if I set the BIOS to anything lower than 1.375V (static voltage value, not actual load voltage), the system will crash at 4x5GHz with the default cache multi.
Maximum load voltage is the least of my concerns, as the above already hovers just below the 98°C temp throttle during a short bench @5GHz... -
Anyways, any way to bypass that 1.375V though? For idiots like me?
-
I was going off this video, from this point.
The argument there is rather sound to me: if you're going for pure rendering etc. (pure CPU grunt), then there were other options. -
Their argument, tbh, is only sound if you're able to spend upwards of $2000 on at least a 12-16 core Xeon; anything less and it's a total waste of money. Even in a dual system with 2 x 12-core processors.
Unless you get them for super cheap, and they are strictly for render slave usage. -
But isn't that what I said? If it's basically only caring about rendering/etc.
-
Big difference between a render slave and a workstation.
Render Slaves are way more efficient if setup properly, since there is no GUI to deal with (in most cases) -
We need to get the @Prema lab a new chip.
-
Maybe the Kaby Lake-X that Eurocom says will work in our models. Unreal.
-
Ah, ok. Did not know that. I never had that problem with the last 4 chips I tested since I got this machine. Unless your BIOS is different from ours?
And it sounds like you need an external laptop cooler like the rest of us.
-
Exactly. That 1.375V wall is why 5.2GHz is the max benchable CPU OC for me also. Idle voltage can go higher, but under load that's where it always ends up being the max.
Testing a new mod now. Going to reinstall the new version of Fire Strike and see if it is better. Seems to be, from my highest score and yours. Looks like they fixed the bugs without updating the version number.
http://www.3dmark.com/fs/12822424
-
So I updated (not clean installed) from 3509 to the latest version, and since then I can't go over 208XX (total) and 255XX (graphics) in FS single GPU no matter what. Any way to fix this other than having to reinstall the OS?
-
Nah, it's not really fixed. There is a HUGE variance in graphics scores between runs with identical settings. I'm seeing swings of over 1,000 3DMarks between runs, whereas 3509 is pretty consistent from one run to the next. The new version is also very driver sensitive. If you use drivers in the 38X.XX branch scores will be lower than those in the 37X.XX branch. Uninstall it with Revo and then a registry cleaner after rebooting, then clean install 3509 and you will probably be back to normal again. That worked for me last time I downgraded 3DMark.
-
Gotcha. I'll do that and try a fresh install of the latest version once. @Johnksss@iBUYPOWER beat my single-GPU FS score by 9 points using the latest version, so I was trying to do the same.
If nothing helps, then 3509 it is. -
http://www.3dmark.com/fs/12822560
(both runs after this were in the mid 28K range)
Pulled 738W on this one.
-
Yeah, and I just beat John's score on the very first run after going to an older driver, then three runs in a row after that sucked, so the new version of 3DMark is still a piece of crap. I think neither of those scores (mine or John's) are a good reflection of Fire Strike performance as much as they are a buggy fluke. 3509 I can make multiple runs back-to-back with hardly any fluctuation in scores.
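That run-to-run spread is easy to quantify if you log a few scores. A minimal sketch; the score lists below are made-up illustrations, not actual results from this thread:

```python
# One way to quantify "consistent" vs "huge variance": look at the spread
# of repeated benchmark runs. These score lists are made-up illustrations,
# NOT actual results from this thread.
import statistics

def spread(scores):
    """Return (max-min range, sample standard deviation) of a run set."""
    return max(scores) - min(scores), statistics.stdev(scores)

stable_runs = [28650, 28700, 28680, 28660]   # hypothetical 3509-style runs
flaky_runs  = [28900, 27800, 28400, 29000]   # hypothetical new-version runs

print("stable:", spread(stable_runs))
print("flaky: ", spread(flaky_runs))
```

A benchmark build that swings by 1,000+ points between identical runs makes any single score (a 9-point lead included) look like noise rather than a real difference.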
-
Exactly the same behavior. I'll do a clean install and go back to 3509. -
@D2 Ultima I'm pretty much already sold on Threadripper tbh, especially at those price points. I mean... 16c/32t for 850 bucks? Total no-brainer, especially if the entry-level 16-core 1998 can OC as high as the 1998X, as was the case with the Ryzen 7 1700 vs. the top-end 1800X. And in the end, those TR CPUs are nothing more than Ryzen 7 silicon linked by Infinity Fabric, sharing a single package.
True, I won't expect the 1998X to beat the 18-core top dog from Intel, even when OCed, but paying 50% of the price for 85% of the performance (worst case)? I'd be all over that! Plus, WHEN are we actually gonna SEE the top-tier Intel CPUs? My guess is it won't be before well into 2018... by then, AMD will have had half a year of availability!
As for clocks, rumors say 4.2/4.3GHz is the frequency to beat on air for Intel SKL-X. Since we're talking the same Ryzen 7 silicon here, I wouldn't be surprised to see similar top OC values of 4.1-4.2GHz as we saw with 1800X silicon, making the difference in top clocks basically negligible between the two contenders...
My prediction: we're gonna see a 5-7% IPC advantage for Intel, coupled with a 0-5% OC advantage. So 16c vs. 16c should go about +10-12% for Intel, with 18c SKL-X vs. the 1998X TR being around a 20-25% difference... naturally, that's in perfectly scaled apps that use all threads.
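For what it's worth, those percentage advantages compound multiplicatively, so the predicted ranges combine like this (the 5-7% and 0-5% figures are the post's own guesses, not benchmarks):

```python
# Intel's predicted IPC and clock advantages compound multiplicatively,
# so the combined 16c-vs-16c gap spans the product of the two ranges.
# The 5-7% and 0-5% figures are the post's guesses, not benchmark data.
ipc_adv = (0.05, 0.07)   # predicted IPC advantage range
clk_adv = (0.00, 0.05)   # predicted OC/clock advantage range

low  = (1 + ipc_adv[0]) * (1 + clk_adv[0]) - 1   # smallest combined gap
high = (1 + ipc_adv[1]) * (1 + clk_adv[1]) - 1   # largest combined gap

print(f"combined Intel advantage: {low:.1%} to {high:.1%}")
```

The top of that band (a bit over 12%) lines up with the 10-12% guess; at the bottom of the band the gap would only be about 5%.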
-
Ok, so I'm confused. What is up with your battery backup voltage? 106V? Max 111V? Is voltage different in Arizona?
-
If Intel's chips don't pass 4.2GHz, then they're worthless. It'll be Broadwell-E all over again, and their cache has been reworked. It benefits single-threaded workloads afaik, but should falter some more in multi-threaded workloads, so if you're considering direct IPC, it might even be that Threadripper and Skylake-E are 100% the same (with the same RAM anyway).
However, again, remember that the chips need delidding, so if people can toss on a decent Noctua cooler and achieve 4.8GHz on the top-end chips, then Intel takes the win. If not, there's no reason to buy them; the 18-core chip is a literal waste of money if, as you say, 85% of the performance is achievable on Threadripper for a significantly reduced budget.
Now if only AMD would actually... make... their... products... work... properly! -
I don't know why it would be any different in AZ. I've never adjusted anything on the UPS. Just plugged it in to the wall outlet. I figured that would/should take care of itself. Am I wrong?
-
You are totally right! But it should read 120V on average, and hover around 118V worst case to 122V.
I just tested all 7 of mine and they read the exact same thing: 121V.
91V? It should not even work; it has a built-in low-voltage alarm and cutoff.
Very interesting stuff there, but if it's working then I guess that's all that matters. -
All of the US runs at 120V, but some power companies can be a little less than useful about it. 91V should definitely trigger the built-in voltage regulation, however, and force it to a constant 120V. 110V, not so much; there is an acceptable tolerance for lowered/raised voltage. For example:
My country is pretty much idiotic and uses 115V power. On my P370SM3 I had voltage-related shutdowns from low voltage, which my line conditioner fixed. I was advised to run the line conditioner at 110V, and I did so for some time, but I still got shutdowns. This means the line fell below the acceptable voltage levels for Haswell's FIVR in my machine (I have since confirmed it happens on desktop boards too). I ALSO got "too high voltage" readouts at 110V, where the line conditioner forced my voltage to 110V flat, and I noted a significant reduction in, say, my fan speed (there's a standing fan plugged into the line conditioner too) when this happened, so my gear generally sat above 110V.
Then I tried changing the conditioner to 120V to see if that'd fix the shutdowns. It did. But now I get a lot (and it's been happening FAR more frequently recently) of "low voltage" markers, where the line falls below the tolerance band for 120V. And this time, when it forces 120V, the fan is ALWAYS faster than what I normally get. So somehow I get voltage spikes near 120V but never quite there, yet frequently get low voltage. 115V, folks! Great stuff!
Either way, this is why I recommend line conditioners at minimum and UPSes at maximum for really expensive gear like this. I have no idea why yours isn't correcting 91V to 120V (or maybe it is but not saying anything), but 110V should work just fine, I think. -
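A quick way to sanity-check the readings in this exchange is to compare each measured voltage against a tolerance band around nominal. The ±5% band below is an illustrative figure (roughly the ANSI C84.1 Range A service-voltage band); actual equipment tolerances vary by device:

```python
# Sanity-check measured line voltages against a tolerance band around
# nominal. The +/-5% band is an illustrative figure (roughly ANSI C84.1
# Range A service voltage); real equipment tolerances vary by device.
def in_tolerance(measured_v, nominal_v=120.0, pct=0.05):
    low, high = nominal_v * (1 - pct), nominal_v * (1 + pct)
    return low <= measured_v <= high

# Voltages mentioned in the posts above
for v in (91, 106, 111, 115, 121):
    print(f"{v}V within 120V +/-5%? {in_tolerance(v)}")
```

By that band, 91V, 106V, and 111V are all out of spec for 120V nominal, which fits the "it should not even work" reaction above, while 115V and 121V pass.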
Yeah baby! Got DDR4-2933 TM5 stable, now tightening the timings.
Seems like VCCIO and VDDQ are helping out in this regard.
One step closer to 3000... Corsair should pay me for all this tweaking; as far as I've seen, I'm basically the ONLY user on the internet who has gotten these friggin' sticks to work stable at more than 2666MHz (as a reminder: these sticks are advertised as 3000MHz CL16).
Will keep you guys updated with final timings and performance data. -
Let me know if they still work after the following test... (should you get around to it):
You will need at least 1 hard Windows crash,
1 complete BIOS reset,
1 Windows install. -
This was just emailed to Clevo. Think it'll fall on deaf ears?
Hello,
Since the Panther model (the P570WM), the P870DM, P870DM2&3 and P870KM1 have been basically non-upgradable. With the new X299 boards there are a bunch of processors that people could choose from: for people that want 4 cores there's Kaby Lake-X, for people that want more there's Skylake-X. I would STRONGLY consider a P570WM refresh to get the DTR line back up to what it was before. No one who purchases these is worried about the weight; if they were, they would stay with the BGA trash that's out there. Get back on your old path. Get the Panther up and running again with modern I/O ports. You'll sell a truckload of them.
Thank you
Jon Webb -
Meaker@Sager Company Representative
Got the new IHS; seems pretty decent so far. I have not done too much testing, as I was having a nostalgia trip playing my brother at the remastered Constructor.
-
Did a fresh Windows install between my 2900 and 2933 tweaking, so that's covered.
Haven't counted the number of times I've reset my BIOS, but definitely a LOT.
As for a hard Windows crash, not sure what you mean...? Deliberately forcing a BSOD and checking for stability afterwards, or what? I've had several of those as well during my testing, so...
Anyways, performance is basically the same as at 2900MHz; latency is actually a tiny bit better, with copy throughput following suit, whereas read and write are a tiny bit lower. All in all, pretty much the same!
Now comes the usual "RAM OC is voodoo!" part:
Timings are EXACTLY the same as at 2900MHz, except for tREFI (which resets to a specific default value at different speeds, so I can't control that anyway), and guess what: all voltages are at STOCK! Stock uncore, stock VDIMM 1.20V, stock VCCIO, stock VDDQ. Ain't that crazy?
Here's the comparison:
2900 vs. 2933MHz
CL: 18/18
RCD: 19/19
RAS: 39/39
CWL: 18/18
FAW: 14/14
REFI: 11321/11454
RFC: 518/518
RRD: 9/9
RTP: 9/9
WR: 18/18
WTR: 23/23
NMode: 2/2
VDimm: 1.30V/1.20V
VCCIO: Stock/Stock
Uncore: Stock/Stock
VDDQ: Stock/Stock
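Side note on why tREFI resets on its own: the auto values above are consistent with the standard DDR4 average refresh interval of ~7.8 µs multiplied by the memory clock. That's an inference from the two numbers, not documented BIOS behaviour:

```python
# The BIOS's auto tREFI values above are consistent with the standard DDR4
# average refresh interval of ~7.8 microseconds multiplied by the memory
# clock (half the DDR data rate). This is an inference from the numbers,
# not documented BIOS behaviour.
TREFI_US = 7.8

def default_trefi(data_rate_mts):
    mem_clock_mhz = data_rate_mts / 2     # DDR: clock = data rate / 2
    return TREFI_US * mem_clock_mhz       # refresh interval in memory clocks

print(default_trefi(2900))   # ~11310 vs. 11321 in BIOS
print(default_trefi(2933))   # ~11439 vs. 11454 in BIOS
```

Both come out within ~0.15% of the BIOS defaults (11321 and 11454), so the BIOS is almost certainly just re-deriving tREFI from the new memory clock.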
Now let's see if I can finally get 3000MHz dialed in... *sigh* -
@jaybee83 Then it sounds like you have the bases covered!
Good job. -
Anyone face this issue where the 3DMark FS/TS framerate gets stuck at 120fps, sometimes hovers at 170-180fps, and then drops back to a constant 120?
@Mr. Fox @Johnksss@iBUYPOWER -
Are you sure that you are running the exact same driver on both cards?
-
Yeap. DDU'd as well.
Wait, is it even possible to run different drivers for different cards, unless the user manually installs the driver for each individual adapter in Device Manager? -
Windows would do it from time to time because it couldn't update the second card in time, for whatever reason. Or a driver install where you only install to one card, thinking it would install to both cards at the same time.
-
Interesting. I'll double check as soon as I'm home tonight. Never had this happen to me before, though.
-
Yes, it used to happen a lot, especially with 3DMark 11. I would know as soon as I started the benchmark, because it would launch at like 150 to 180 FPS instead of 300 FPS and just stay at that FPS for a long time. I have not seen it as much lately, but I saw it happen a couple of times in the last week in Fire Strike, 3DMark 11 and Vantage. It behaves almost like v-sync is enabled even when it is not.
When I see that, if I open NVIDIA Control Panel, change the "Multi-display/mixed-GPU acceleration" option to the opposite of its current value (it does not matter whether multiple or single display mode is selected), apply the change, then immediately switch it back and apply again, my stuck framerate cap gets unstuck instantly. It works every time for me, even though Brother @D2 Ultima says it should not matter. -
It really shouldn't; here, scroll down until you see the heading and give this a read:
http://www.tweakguides.com/NVFORCE_7.html -
LOL, I knew that would get your attention. I believe you that it shouldn't. The purpose of it has nothing to do with DirectX. All I know is that flipping that setting back and forth makes that problem go away. It may have nothing to do with the underlying cause, but it gets it unstuck somehow. I don't have an explanation as to why.
-
-
That would be the safest assumption, since they seem to have joined the performance benchmark manipulators' club.
-
I changed from centered to stretched in the 3DMark settings and that fixed it. Like, wut?
-
Magic. Wishes and dreams. Hopes and fairies. Wings and prayers. Cosplayer chesticles. Frank Azor. Clevo QC team. My accent.
Could be any of those really.
Sent from my OnePlus 1 using a coconut -
That's the thing, it didn't work this morning. And even with centered, it was full screen.
Actually... it still is full screen with centered. NVIDIA Control Panel isn't set to force full screen. -
Yeah, centered is always a huge nope for 3DMark benches. I don't know why but that caps the framerate. It doesn't work the same as windowed versus full screen. I'm not sure why they even have that option. Makes no sense to me. It screws up single or multi-GPU benchmark results.
Clevo Overclocker's Lounge
Discussion in 'Sager/Clevo Reviews & Owners' Lounges' started by Spartan@HIDevolution, Mar 4, 2016.