Well, I told you. You won't see an HBM MXM module unless nGreedia says so. BTW, did you notice the TDP? 95W.
-
-
What about it?
-
It's way above that in the current implementation (~125W).
-
Which implementation is that and where does it say 125W?
-
Check the data sheet.
-
Ah, I think triturbo is hinting at binned chips with higher efficiency vs. the previous M295X models.
-
Those are the same specs as the BGA M295X in the Retina iMac, right down to the clocks, TDP, and FLOPS. This one is 3 TFLOPS, meaning it's clocked at 725 MHz, with the VRAM likely lower at 1250 MHz as well, hence the 95W TDP. Basically the same underclocked M295X as in the Alienwares, but in MXM instead of BGA.
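For anyone who wants to check that arithmetic: assuming the M295X's 2048 stream processors (my assumption, the shader count isn't stated above), the quoted 3 TFLOPS does fall out of a 725 MHz clock. A minimal sketch:

```python
# Single-precision FLOPS = 2 ops/cycle (FMA) x shader count x clock.
# 2048 shaders is the M295X's count (an assumption, not stated in the post).
shaders = 2048
clock_hz = 725e6  # the 725 MHz implied above
tflops = 2 * shaders * clock_hz / 1e12
print(tflops)  # ~2.97, i.e. the quoted "3 TFLOPS"
```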
-
Yet it runs pretty hot in the iMac, where the 780M was running a bit cooler (still not comfortably cool, but that's the iMac for you).
-
That's because the 780M is MXM and 100W TGP according to @Prema, and the iMacs that housed the 780M were the much thicker non-Retina ones.
-
You managed to hit 1700MHz memory on a 980M? I've seen maybe two people total (yourself included) who can reliably hit 1500MHz; I want to see this.
-
HaloGod2012 Notebook Virtuoso
My 980M is sitting at 6.4Gbps with no issues... haven't tried to go further. I bet it could, since 6.4 has been 100 percent stable for weeks. -
You have a 980M at 1650MHz on memory?
Can I see a Firestrike? -
Out of curiosity does your card use Samsung VRAM? I wasn't even able to get 1300MHz from my Samsung cards. :\
-
What!? I can't even break 6Gbps vRAM. Of the last three 980Ms I've tested, none of them have been able to break 6000MHz.
-
Yeah, exactly why I'm asking for proof.
I think people are mistaking hitting 1500MHz for 7000MHz effective or something. I know Jaybee managed something like 1508MHz in the P750ZM thread, but that's a very far cry from 1700MHz. -
Most were from like 5600-5800MHz (1400-1450MHz)
10-15% is expected, but much more than that and you're talking premium silicon. It would be nice to be able to adjust vRAM voltage a bit to push it a little further, though. But in most cases, at least for mobile GPUs, I'm not sure vRAM really is much of a limiting factor. 5000MHz is probably the "sweet spot" for mobile GPUs in the same way 1600MHz DDR3 RAM is the current "sweet spot" for system RAM. -
Can you please explain what the MHz figures in parentheses stand for? 5600-5800MHz is CPU, right?
-
Usually, cards come with one of three kinds of memory today: GDDR3, GDDR5, and HBM (High Bandwidth Memory). They also have a memory clock and a memory bus. All of these variables combine into the memory bandwidth. Just like clock speed, more memory bandwidth is a good thing. Games do not usually benefit much from increased memory bandwidth though, so don't expect huge gains from overclocking memory in most games. Some games do, but I don't remember any of their names off-hand.
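For what it's worth, the standard way those variables combine is effective clock times bus width. A quick sketch using the desktop GTX 680's published figures (256-bit bus, 6008MHz effective GDDR5; these example numbers are mine, not from the post above):

```python
# Peak memory bandwidth = effective clock (transfers/s) x bus width (bits) / 8 (bits/byte)
bus_width_bits = 256          # desktop GTX 680
effective_clock_mhz = 6008    # 1502 MHz GDDR5, quad-pumped
bandwidth_gb_s = effective_clock_mhz * 1e6 * bus_width_bits / 8 / 1e9
print(round(bandwidth_gb_s, 1))  # ~192.3 GB/s, matching the card's spec sheet
```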
HBM is different from GDDR3 and GDDR5 in mostly physical ways. Calculation-wise it's very similar (as I expand on below), and thus I am not giving it its own section. HBM 1.0 (currently on the R9 Fury and R9 Fury X cards) is limited to 4GB. HBM 2.0 will not be. Since googling HBM turns up many articles explaining how it works physically, I will defer to those rather than attempt to explain it again here (if you've noticed, I did not explain GDDR3/GDDR5's physical makeup more than was necessary).
Your memory clock is represented in multiple different ways. There is your base clock, which is usually an exceedingly low number. nVidia gaming-class cards since Kepler (GTX 600/700 series) came out have had a standard of 1500MHz for the desktop lineup in terms of memory speed. AMD has been using less for the most part (with the 7000 and R9 2xx series) but has bumped the speed recently (R9 3xx series). This clock speed is not what you're going to be too concerned with; you should be concerned with your effective memory clock. Your effective memory clock depends on the type of video memory you have, which I will explain below (with a quick sketch after the list):
- GDDR3 memory (which you won't find in midrange or high-end cards these days) doubles that clock speed. So a card with 1500MHz memory clock using GDDR3 RAM will have a 3000MHz effective memory clock.
- GDDR5 memory (which you will find everywhere in midrange and high-end cards these days) doubles GDDR3's doubler. In other words, it multiplies the clock speed by 4. So a card with 1500MHz memory clock using GDDR5 RAM will have a 6000MHz effective memory clock.
- HBM memory (only present in two cards right now) also doubles the clock speed, similarly to GDDR3. So a card with "500MHz" memory clock (like this) will have an effective memory clock of 1000MHz (despite that link ironically claiming the effective clock is 500MHz).
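Here's the quick sketch promised above: a tiny Python illustration (my own, just restating the three bullets) that maps a base memory clock to its effective clock:

```python
# Effective-clock multiplier per memory type, as in the list above:
# GDDR3 and HBM 1.0 double the base clock; GDDR5 quadruples it.
MULTIPLIER = {"GDDR3": 2, "GDDR5": 4, "HBM": 2}

def effective_clock(base_mhz, mem_type):
    return base_mhz * MULTIPLIER[mem_type]

print(effective_clock(1500, "GDDR3"))  # 3000 MHz
print(effective_clock(1500, "GDDR5"))  # 6000 MHz
print(effective_clock(500, "HBM"))     # 1000 MHz
```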
Now there are three ways one usually reads the memory clock from a card with GDDR5 RAM. Let's use the GTX 680 as an example. Some programs and people list the actual clock, which is 1500MHz. One such program is GPU-Z. Other programs list the doubled clock speed, which would be 3000MHz. Those programs are often overclockers such as nVidia Inspector. MSI Afterburner also works on the doubled clock speed, though it does not list the clocks themselves. Then finally, the effective clock speed is often seen in the sensor-type parts of programs, such as GPU-Z's sensor page. Please remember which clock your program works with when overclocking. If you want to go from 6000MHz to 7000MHz effective, for example, you would need a +500MHz boost in MSI Afterburner.
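Since Afterburner works on the doubled clock, the offset for a given effective-clock target works out like this (a sketch of the arithmetic above, not Afterburner's actual code):

```python
# GDDR5: effective clock = doubled clock x 2, and Afterburner's offset
# applies to the doubled clock, so divide the effective-clock gap by 2.
def afterburner_offset(current_effective_mhz, target_effective_mhz):
    return (target_effective_mhz - current_effective_mhz) / 2

print(afterburner_offset(6000, 7000))  # +500 MHz, as in the example above
```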
HBM is read by both GPU-Z and MSI Afterburner at its default clock rate, and is indeed overclockable via MSI Afterburner (though not by Catalyst Control Center). I am unsure of other tools people use for AMD OCing that aren't Catalyst Control Center or MSI Afterburner, but there is a chance it may be OC-able by other programs. -
Okay, makes sense.
-
HaloGod2012 Notebook Virtuoso
Here you all go, 6400MHz. I'm betting I can go higher, but don't care to.
http://www.3dmark.com/3dm/8741117? -
4GB model, huh... either way you're lucky. I've never seen a 980M in any system hit that kind of vRAM speed.
Think you could show us a picture of your chip?
Also, wow, that physics score for a 3.5GHz chip. My 4800MQ did what, 10413 or so? I wonder if Broadwell's L4 cache benefits physics in Firestrike some... -
HaloGod2012 Notebook Virtuoso
Yes, I can get a picture; need to add an SSD again soon anyway. The 5700HQ has been awesome. Overclocked to 3.7GHz, I can do over a 12k score on physics. Best part of this chip: NO throttling at all... unlike the horrible 4700HQ, 4710HQ, and 4720HQ chips I dealt with last year. -
You're lucky. From what I can see they're still TDP limited, so you must've gotten a good chip.
If you break a 12K score at 3.7GHz on that chip, you're definitely being helped by something. I'm attributing it to the L4 cache. Looks like Broadwell is better than Skylake for the hardcore benchers... Broadwell-E might be the best benchmark CPU instead of Skylake-E due to that.
Well, if Skylake can clock higher it might be better to use Skylake... 5960X chips can pull 400W+ on their own, and Broadwell usually draws more power than Haswell too; a Broadwell 8-core might be monstrous and hot. -
How about a GPU-Z screenshot? Not that I don't believe you, but I trust GPU-Z more than Futuremark.
-
OHNOZ! Noob mistake, argh...
D2 is right. I was thinking along the lines of: 1253MHz stock vRAM + 510MHz offset = 1763MHz, ×4 = 7052MHz effective speed.
What I didn't consider: Nvidia Inspector calculates the offset based on the doubled clock: 1253 × 2 = 2506MHz, + 510MHz = 3016MHz, ×2 = 6032MHz effective!
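Spelled out side by side (my sketch of the two readings):

```python
base_mhz, offset_mhz = 1253, 510

# Mistaken reading: offset added to the base clock, then quad-pumped.
wrong = (base_mhz + offset_mhz) * 4      # 7052 MHz effective -- too good to be true
# What Nvidia Inspector actually does: offset applied to the doubled clock.
right = (base_mhz * 2 + offset_mhz) * 2  # 6032 MHz effective
print(wrong, right)
```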
Sorry for the confusion, guys!
In any case, here's a screenie:
-
HaloGod2012 Notebook Virtuoso
Yes, I can provide a GPU-Z screenshot when I am back home today. I'm surprised this is such a big deal; I must have some amazing Samsung memory on this GPU. -
HaloGod2012 Notebook Virtuoso
One thing I noticed about the big MSI laptops: they definitely tweak something to stop the throttling. My GT80 Titan with the 48XXHQ was fine, and my GT72 is also fine. Every other model had throttling. -
Good chip or good laptop? I just tested the Gigabyte P55W with the i7-5700HQ and it power throttled constantly in XTU and while rendering video, and thermal throttled in any game and 3DMark.
-
The amount of vRAM has a strong impact on OC-ability.
The 4GB GTX980M vRAM clocks a lot higher than the 8GB.
Many of the 6GB 970Ms already do a stable 3000/6000MHz+, while very few 8GB 980Ms run stable at that speed.
Here's my AW 18 beta vBIOS in SLI with two Clevo cards:
http://www.3dmark.com/3dm11/10341773
-
Futuremark SystemInfo polls the GPU in the same manner as GPU-Z, so it's not as if either is inherently more or less reliable.
-
I've had them be completely different.
-
In the recent past or a long time ago?
-
Futuremark has the issue of usually polling the clocks when the hardware is at idle, so you often get way lower readings.
-
I haven't had this issue since SystemInfo was patched a couple years ago
-
HaloGod2012 Notebook Virtuoso
-
Why is that? And why is your default clock almost 500MHz higher than the guy's above me, if he's got a 980 and you've got a 970? Shouldn't it be the opposite?
-
Because more GB are always harder to OC. More traces, more load on the board, and a 100% higher chance that one of the vRAM chips can't keep up.
The GPU above your post is idle and does nothing while the other one was running a 3DMark bench (see vRAM speed in the benchmark result). -
HaloGod2012 Notebook Virtuoso
I already ran a Firestrike bench and posted all the proof, so those clocks I posted are bench- and game-stable. -
I simply answered his question...
-
I've seen a whole lot of people running into their TDP limits when they push their chips on GT72 and GT80 machines, so I don't believe they've fixed it; I'd need to put them under a ton of stress to believe that.
What I DO believe is that some people get decent chips with decent voltages and almost never pass their TDP limits. My 4800MQ at 3.5GHz and 0.9976V, for example, almost never passes 47W unless I render something (hits 50W), livestream (similar wattage), or run something like Linpack (hits ~84W). A 4xxxHQ chip that comes with something like 1.15V at stock is far more susceptible to it. I'd just say you're lucky, and enjoy your luck. -
My 5700HQ was 1.147V stock (as you know). Undervolting -100mV to 1.047V removed the throttling in XTU but only reduced it a little in everything else.
-
-
That's just insane, Prema!
-
-
HaloGod2012 Notebook Virtuoso
Just in case anyone wanted to see my 5700HQ running at 3.7GHz: it scores over 12k in Firestrike. Had a gaming session last night for about 4 hours straight and the logs show nothing under 3.7GHz. I guess I'm very lucky with this chip.
http://www.3dmark.com/3dm/8777943? -
Wow, that's a golden chip indeed, and you can consider yourself incredibly lucky! @D2 Ultima, take a look at this!
-
Jeez... Your physics score is crazy close to my 4.7GHz 4790K's.
http://www.3dmark.com/compare/fs/6133190/fs/6031297 -
It's only Firestrike though. I bet my 4790K would trash it in 3DMark 11 and Catzilla.