Well, I told you. You won't see an HBM MXM module unless nGreedia says so. BTW, did you notice the TDP - 95W?
-
It's way above that in the current implementation (~125W).
-
Check the data sheet.
-
Ah, I think triturbo is hinting at binned chips with higher efficiency vs. the previous M295X models.
-
Yet it runs pretty hot in the iMac, where the 780M was running a bit cooler (still not comfortably cool, but that's iMac for you).
-
HaloGod2012 Notebook Virtuoso
-
Can I see a firestrike? -
-
-
I think people are mistaking hitting 1500MHz for 7000MHz effective or something. I know Jaybee managed something like 1508MHz in the P750ZM thread, but that's a very far cry from 1700MHz. -
Most were from like 5600-5800MHz (1400-1450MHz).
10-15% is expected, but much more than that and you're talking premium silicon. It would be nice to be able to adjust vRAM voltage a bit to push it a little further, though. But in most cases, at least for mobile GPUs, I'm not sure vRAM really is much of a limiting factor. 5000MHz is probably the "sweet spot" for mobile GPUs in the same way 1600MHz DDR3 is the current "sweet spot" for system RAM. -
-
Usually, cards come with one of three kinds of memory today: GDDR3, GDDR5 and HBM (High Bandwidth Memory). They also have a memory clock and a memory bus, and all of these variables combine to determine memory bandwidth. Just like clock speed, more memory bandwidth is a good thing. Games do not usually benefit much from increased memory bandwidth though, so don't expect huge gains from overclocking memory in most games. Some games do, but I don't remember any of their names off-hand.
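To make the "combine" part concrete, here's a rough back-of-the-envelope sketch (my own illustration, not from this thread; the helper name is made up). Peak theoretical bandwidth is just effective clock times bus width:

```python
# Peak theoretical memory bandwidth (rough sketch, illustrative only):
# effective clock (MHz, i.e. MT/s) x bus width (bits) / 8 = MB/s.

def memory_bandwidth_gbps(effective_clock_mhz: float, bus_width_bits: int) -> float:
    """Hypothetical helper: peak theoretical bandwidth in GB/s."""
    return effective_clock_mhz * bus_width_bits / 8 / 1000

# e.g. a GTX 680: ~6008MHz effective GDDR5 on a 256-bit bus
print(memory_bandwidth_gbps(6008, 256))  # ~192.3 GB/s, matching the spec sheet
```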
HBM is different from GDDR3 and GDDR5 in mostly physical ways. Calculation-wise it's very similar (as I expand on below), and thus I am not giving it its own section. HBM 1.0 (currently on R9 Fury and R9 Fury X cards) is limited to 4GB. HBM 2.0 will not be. Since googling about HBM provides many articles explaining how it works physically, I will defer to those rather than attempt to explain it again here (if you've noticed, I did not explain about GDDR3/GDDR5's physical makeup more than was necessary).
Your memory clock is represented in multiple different ways. There is your base clock, which is usually an exceedingly low number. nVidia gaming-class cards since Kepler (the GTX 600/700 series) have had a standard of 1500MHz memory speed for the desktop lineup. AMD has mostly used less (with the 7000 and R9 2xx series) but has bumped the speed recently (R9 3xx series). This base clock is not what you should be too concerned with; what matters is your effective memory clock, which depends on the type of video memory you have, as I explain below (with a quick worked sketch after the list):
- GDDR3 memory (which you won't find in midrange or high-end cards these days) doubles that clock speed. So a card with 1500MHz memory clock using GDDR3 RAM will have a 3000MHz effective memory clock.
- GDDR5 memory (which you will find everywhere in midrange and high-end cards these days) doubles GDDR3's doubler. In other words, it multiplies the clock speed by 4. So a card with 1500MHz memory clock using GDDR5 RAM will have a 6000MHz effective memory clock.
- HBM memory (only present in two cards right now) also doubles the clock speed, similarly to GDDR3. So a card with "500MHz" memory clock (like this) will have an effective memory clock of 1000MHz (despite that link ironically claiming the effective clock is 500MHz).
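If it helps, the three bullets above boil down to a single multiplier per memory type. A minimal sketch of that rule (my own summary of this post, not an official table):

```python
# Multiplier from base memory clock to effective memory clock,
# following the rules described above.
MULTIPLIER = {"GDDR3": 2, "GDDR5": 4, "HBM": 2}

def effective_clock_mhz(base_mhz: float, mem_type: str) -> float:
    return base_mhz * MULTIPLIER[mem_type]

print(effective_clock_mhz(1500, "GDDR3"))  # 3000.0
print(effective_clock_mhz(1500, "GDDR5"))  # 6000.0
print(effective_clock_mhz(500, "HBM"))     # 1000.0
```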
Now, there are three ways one usually reads the memory clock from a card with GDDR5 RAM. Let's use the GTX 680 as an example. Some programs and people list the actual clock, which is 1500MHz; one such program is GPU-Z. Other programs list the doubled clock speed, which would be 3000MHz; those are often overclockers such as nVidia Inspector. MSI Afterburner also works on the doubled clock speed, though it does not list the clocks themselves. Finally, the effective clock speed is often shown in sensor-type parts of programs, such as GPU-Z's sensor page. Please remember which clock your program works with when overclocking: if you want to go from 6000MHz to 7000MHz effective, for example, you would need a +500MHz boost in MSI Afterburner.
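Since MSI Afterburner and nVidia Inspector work on the doubled (x2) clock, the offset you need is half of the desired change in effective (x4) clock. A quick sketch of that conversion, assuming GDDR5 (the helper name is hypothetical):

```python
def afterburner_offset_mhz(current_effective_mhz: float, target_effective_mhz: float) -> float:
    """Offset to enter in a tool that works on the doubled (x2) clock,
    given GDDR5 effective (x4) clocks. Hypothetical helper for illustration."""
    return (target_effective_mhz - current_effective_mhz) / 2

print(afterburner_offset_mhz(6000, 7000))  # 500.0 -> enter +500MHz
```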
HBM is read by both GPU-Z and MSI Afterburner at its default clock rate, and is indeed overclockable via MSI Afterburner (though not by Catalyst Control Center). I am unsure of other tools people use for AMD OCing that aren't Catalyst Control Center or MSI Afterburner, but there is a chance it may be OC-able by other programs. -
Okay, makes sense.
-
HaloGod2012 Notebook Virtuoso
Here you all go, 6400MHz. I'm betting I can go higher, but don't care to.
http://www.3dmark.com/3dm/8741117? -
Think you could show us a picture of your chip?
Also, wow, that physics score for a 3.5GHz chip. My 4800MQ did what, 10413 or so? I wonder if Broadwell's L4 cache benefits physics in Firestrike some... -
HaloGod2012 Notebook Virtuoso
Yes, I can get a picture; I need to add an SSD again soon anyway. The 5700HQ has been awesome; overclocked to 3.7GHz I can do over a 12k score on the physics test. Best part of this chip: NO throttling at all... unlike the horrible 4700HQ, 4710HQ, and 4720HQ chips I dealt with last year. -
If you break a 12K score at 3.7GHz on that chip, you're definitely being helped by something. I'm attributing it to the L4 cache. Looks like Broadwell is better than Skylake for the hardcore benchers... Broadwell-E might be the best benchmark CPU instead of Skylake-E because of that.
Well, if Skylake can clock higher it might still be better to use Skylake... 5960X chips can pull 400W+ on their own, and Broadwell usually draws more power than Haswell too; a Broadwell 8-core might be monstrous and hot. -
How about a GPU-Z screenshot? Not that I don't believe you, but I believe GPU-Z more than Futuremark.
-
OHNOZ! noob mistake, argh...
D2 is right. I was thinking along the lines of: 1253MHz stock vRAM + 510MHz offset = 1763MHz, *4 = 7052MHz effective speed.
What I didn't consider: nVidia Inspector calculates the offset based on the doubled clock, so 1253 * 2 = 2506MHz, + 510MHz = 3016MHz, *2 = 6032MHz effective!
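In code form, the wrong vs. the right math (a quick sanity check of my own, not from the thread):

```python
stock = 1253   # MHz, actual vRAM clock
offset = 510   # MHz, offset entered in nVidia Inspector

wrong = (stock + offset) * 4      # 7052 -- assumes the offset applies to the actual (x1) clock
right = (stock * 2 + offset) * 2  # 6032 -- Inspector applies the offset to the doubled (x2) clock

print(wrong, right)  # 7052 6032 (MHz effective)
```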
sorry for the confusion guys!
In any case, here's a screenie:
-
HaloGod2012 Notebook Virtuoso
-
HaloGod2012 Notebook Virtuoso
-
The amount of vRAM has a strong impact on OC-ability.
The 4GB GTX 980M's vRAM clocks a lot higher than the 8GB version's.
Many of the 6GB 970Ms already do a stable 3000/6000MHz+, while very few 8GB 980Ms run stable at that speed.
Here's my AW 18 beta vBIOS in SLI with two Clevo cards:
http://www.3dmark.com/3dm11/10341773
-
-
Futuremark has the issue of usually polling the clocks when the hardware is at idle, so you often get way lower readings.
-
HaloGod2012 Notebook Virtuoso
-
Because more GB is always harder to OC: more traces, more load on the board, and a 100% higher chance that one of the vRAM chips can't keep up.
The GPU above your post is idle and does nothing while the other one was running a 3DMark bench (see vRAM speed in the benchmark result). -
HaloGod2012 Notebook Virtuoso
-
What I DO believe is that some people get decent chips with decent voltages and almost never pass their TDP limits. My 4800MQ at 3.5GHz and 0.9976V, for example, almost never passes 47W unless I render something (hits 50W), livestream (similar wattage) or run something like Linpack (hits ~84W). A 4xxxHQ chip that comes with something like 1.15V at stock is far more susceptible to it. I'd just say you're lucky, and enjoy your luck. -
-
That's just insane, Prema!
-
HaloGod2012 Notebook Virtuoso
Just in case anyone wanted to see my 5700HQ running at 3.7GHz: it scores over 12k in Firestrike. Had a gaming session last night for about 4 hours straight and logs show nothing under 3.7GHz. I guess I'm very lucky with this chip.
http://www.3dmark.com/3dm/8777943? -
Wow, that's a golden chip indeed, and you can consider yourself incredibly lucky! @D2 Ultima, take a look at this!
-
http://www.3dmark.com/compare/fs/6133190/fs/6031297 -