This is only natural; they do have differing architectures, after all. However, the reason you see greater performance from a lower number of SPs in NV GPUs is that they are clocked significantly higher. This doesn't mean ATI is being disingenuous about the quantity or even the performance of their SPs.
Tell it to the OEMs supplying systems with the 7900 GS then!
Market segment is determined by performance/price targets at time of introduction, not when the next generation of parts has been released.
It will depend on the bottleneck of each game. If there is a pixel fillrate limitation, the 7900 GS will still outperform the 8600M GT and MR HD2600 XT. If a math (i.e. shader) limitation exists, the newer parts should outperform the 7900 GS.
No, but generally speaking an MPU/ASIC manufactured on a smaller process will have lower power consumption than another MPU/ASIC with similar clockspeeds and transistor count on a larger process.
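To put a rough number on that intuition, here is a toy sketch only: CMOS dynamic power scales roughly with C·V²·f, and a process shrink typically lowers both switching capacitance and supply voltage. Every figure below is an invented assumption for illustration, not a spec for any of the parts discussed in this thread.
```python
# Illustrative sketch of CMOS dynamic power scaling across a process shrink.
# Every figure here (activity, capacitance, voltage, clock) is an invented
# assumption for the example, not a real spec for any GPU in this thread.

def dynamic_power(activity, capacitance, voltage, clock_mhz):
    """P ~ alpha * C * V^2 * f, in arbitrary relative units."""
    return activity * capacitance * voltage ** 2 * clock_mhz

larger_process = dynamic_power(activity=0.5, capacitance=1.00, voltage=1.20, clock_mhz=475)
smaller_process = dynamic_power(activity=0.5, capacitance=0.75, voltage=1.05, clock_mhz=475)

print(f"relative dynamic power, larger process:  {larger_process:.0f}")
print(f"relative dynamic power, smaller process: {smaller_process:.0f}")
print(f"smaller process draws ~{smaller_process / larger_process:.0%} of the larger part's dynamic power")
```
Leakage current complicates the picture, especially on an immature process, which is where the "generally speaking" caveat comes in.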
Pixel fillrate is only a non-factor when running low resolutions (i.e. sub 1 megapixel). Since most laptop displays feature native resolutions of 1.7 megapixel or higher, ROP/RBE throughput is certainly still a factor in gaming performance.
The HD2600 XT and 8600M GT? So far, yes. That's why I said it would be interesting to re-visit this topic in a few months once drivers for each have matured.
-
-
masterchef341 The guy from The Notebook
i have 10 bricks that need to be moved. I can pay 2 people $5 each to carry five bricks in a satchel, and it will be done in one trip.
or i can pay 10 people $1 each to carry one brick. these people don't have to be quite as strong, individually.
but as a group, both sets are equal in strength, both sets moved 10 bricks in one trip, and it cost me $10.
Now I turn brick moving into a business, and I want to sell it. Business A lists that they have 2 brick movers, whereas Business B lists that they have 10! and both cost the same!
Neither company is being disingenuous, but the industry standard statistic "brick movers" is not the number I really want to know when choosing which business to move my bricks.
so who is better, ati or nvidia?
exactly.
I'm not too concerned about the OEMs. It matters more that the consumer makes the right decision than whether an OEM offers a product or not.
maybe. does that mean my geforce 256 is a high end part? it was (past tense), but is it still? If I were selling geforce 256 cards, what market segment would I be targeting? when they were introduced, they were high end. you tell me.
I guess we can wait and see how this pans out.
that is absolutely true. generally is a really important word, though. rushing a new fab process can introduce new kinks, new voltage leaks, etc. apparently the desktop versions (hd 2600 / 2600 xt) are hotter but less power hungry than the geforce 8600 gt / gts. thermal limitations are just as prevalent in laptops as power limitations.
so is it better to combat heat or combat power draw? ehh...
maybe... maybe... can you back up those numbers with evidence? sources?
well, they aren't equal yet. look at the horrible ati FEAR benchmark. I would expect them to equalize as drivers mature on both sides.
This would be a good time to note that you were wrong about your ROP count. it's only 4 for the ati parts vs 8 for the nvidia parts. does that make nvidia better? maybe. maybe ati's ROPs are twice as efficient as nvidia's.
we will see. -
Your analogy is not applicable to this scenario. As parallel a workload as graphics rendering is, there are still dependencies that make performance scaling non-linear in everything but the best-case scenarios. Otherwise we would see SLI and CF delivering perfect 100% performance increases in purely GPU-bound scenarios, which hardly ever happens.
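A back-of-the-envelope illustration of that non-linearity, as an Amdahl's-law toy model; the serial fractions below are invented for the sake of the example, not measured SLI/CF data:
```python
# Toy Amdahl's-law model of why extra parallel hardware (more SPs, SLI/CF)
# rarely scales linearly: any serial or dependent fraction caps the speedup.
# The serial fractions below are invented purely for illustration.

def speedup(units, serial_fraction):
    """Overall speedup with `units` parallel workers and a fixed serial part."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / units)

for serial in (0.0, 0.05, 0.20):
    print(f"serial fraction {serial:.0%}: "
          f"2 GPUs -> {speedup(2, serial):.2f}x, "
          f"4 GPUs -> {speedup(4, serial):.2f}x")
```
Only with a 0% serial fraction do you get the perfect 100% scaling that almost never shows up in practice.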
Absolutely. I was just making a funny about the tendency of OEMs to continue to offer last-gen products even when the current generation is far more compelling, at least from a price/features/power&heat perspective.
In terms of market segment it is. Market segmentation only refers to performance against parts available during the intended lifecycle of the product, and usually only within the company's own product range.
One begets the other, hence the use of the metric "TDP" when discussing either.
Not without the use of purely synthetic performance monitoring tools (such as Nvidia's PerfHUD, ATI's GPU PerfStudio & GPU Shader Analyzer, and MS' PIX) on a per-app (or even per code segment) basis.
Indeed.
Pardon, I was looking at the TMU count for RV630. You are correct. -
masterchef341 The guy from The Notebook
my analogy was only targeting the different unified shader processors on the gpus, not the graphics rendering process in its entirety. i wasn't clear about that, but i think that makes it perfectly applicable.
----
if your definition of market segment is segment on inception only, then market segment is not a useful label. it doesn't matter that my geforce 256 is a high end card- or heck my geforce fx 5950, which was really expensive and totally crap compared to the ati offerings of the time. all that really matters right now is how much performance you can cram into an x watt package. -
I do realize your analogy was meant to be a direct comparison of SPs between ATI & NV GPUs, but as presented it isn't an accurate equivalence. It would be better to describe the situation with the same ratio of brick movers as SPs, if not the exact same numbers. So 160 vs 32 (or some derivative thereof that provides the same ratio). Then you would have to compare the speed with which each set of "brick movers" is capable of moving bricks (i.e. the clockspeed of the SPs). Granted, we can't have brick movers moving bricks billions of times per second, so it would be better to use a ratio in this case, say one per million, and change the unit of time to hours perhaps. Then we would have 160 brick movers moving 700 bricks each per hour vs 32 brick movers moving 950 bricks each per hour. Clearly the ATI brick movers win this benchmark.
The problem here is that this is an absolute best-case scenario and likely never to be experienced in a real-world app. Some efficiency coefficient has to be introduced, but that coefficient varies from app to app and even from frame to frame. That's why we need advanced tools such as the aforementioned PerfHUD, GPU Shader Analyzer, & PIX and would have to break down the performance analysis on a per-shader, per-app basis to get a true idea of the performance of each IHV's shader implementation.
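Putting the figures quoted above into a tiny sketch (SP counts and clocks as used in this thread, with completely invented efficiency coefficients) shows how such a coefficient can swamp the raw numbers:
```python
# Peak vs "effective" shader throughput, using the SP counts and clocks quoted
# in this thread and made-up per-app efficiency coefficients. The coefficients
# are invented only to show how efficiency can swamp the raw numbers.

def effective_throughput(sp_count, shader_clock_mhz, efficiency):
    """Peak SP-cycles per second, scaled by how well an app keeps the SPs busy."""
    return sp_count * shader_clock_mhz * efficiency

ati_peak = effective_throughput(160, 700, efficiency=1.0)
nv_peak  = effective_throughput(32, 950, efficiency=1.0)

# Assumed (not measured) efficiencies: the wider design is harder to keep busy.
ati_eff = effective_throughput(160, 700, efficiency=0.25)
nv_eff  = effective_throughput(32, 950, efficiency=0.90)

print(f"peak:      ATI {ati_peak:>8.0f}  vs  NV {nv_peak:>8.0f}")
print(f"effective: ATI {ati_eff:>8.0f}  vs  NV {nv_eff:>8.0f}")
```
With those assumed coefficients the huge peak advantage collapses to near parity, which is exactly why the coefficient has to be measured per app rather than guessed.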
I'm only going by the definition the IHVs themselves use. And for that matter, the definition of market segmentation for any product used by any company. -
masterchef341 The guy from The Notebook
the point of the analogy is to simplify the problem. i don't have to use the exact same numbers or ratios. the point is that efficient shaders (nvidia) and a greater quantity of shaders (ati) are both means to the same end.
you are missing a HUGE point- ati and nvidia shaders are fundamentally different
clock for clock, one ati shader is not equal to one nvidia shader. one clock through an nvidia shader manipulates more data than one clock through one ati shader. that is why when you multiply the number of shaders by the shader clock for both ati and nvidia, you get a much larger number of "shader clock cycles" for ati but the performance is the same for both ati / nvidia. that was the premise of my whole point.
i looked up market segment on wikipedia. there is no arguing with wikipedia. a market segment is defined as a homogeneous group of buyers. if "high end gaming card" is how we view the geforce 256 overall, it is a high end gamer market card. if we don't, it's not. read up. -
I'm not missing the point. Refer back to my first post on this page:
There is no evidence that NV SPs manipulate more data per clock cycle than their ATI counterparts (other than for efficiency reasons which vary from shader to shader and app to app), unless you care to cite some.
As for the performance difference there are two reasons:
1) efficiency - NV SPs have fewer dependencies than ATI SPs in most code because of their purely scalar architecture, which allows each SP to operate on a single component of a pixel (see the toy sketch after this list)
2) when there is an obvious theoretical shader throughput discrepancy but little real-world performance difference, it implies the bottleneck lies elsewhere. Again, without the use of the tools I mentioned earlier it is nigh impossible to know with certainty what that bottleneck is, except in corner cases (like extremely high-res + high AA sampling level) -
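To illustrate point 1) with a toy model: the instruction mix, lane width and scheduling below are invented, so this is not how either driver actually issues work, but it shows why a purely scalar unit is easier to keep fully busy than a wide one.
```python
# Toy model of scalar vs wide (vector-style) shader units. The instruction mix,
# lane width and scheduling are invented; it only illustrates the utilization
# argument from point 1 above, not real G8x / R6xx behaviour.

instructions = ["mul r0.xyz", "add r1.x", "mad r2.xyzw", "rcp r3.x", "dp3 r4.x"]

def components(instr):
    """Number of result components an instruction writes (toy write-mask parse)."""
    return len(instr.split(".")[-1])

def slots_scalar(instrs):
    # A scalar SP issues one component at a time, so no slot is ever wasted.
    return sum(components(i) for i in instrs)

def slots_wide(instrs, width=4):
    # A wide unit burns a full issue slot even when only one lane does work.
    return len(instrs) * width

useful = sum(components(i) for i in instructions)
wide = slots_wide(instructions)
print(f"useful component-ops:   {useful}")
print(f"scalar unit slots used: {slots_scalar(instructions)} (100% utilization)")
print(f"wide unit slots used:   {wide} ({useful / wide:.0%} utilization)")
```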
masterchef341 The guy from The Notebook
i just want to let you know that whether or not i link you to a source of information doesn't affect whether or not that information exists. it's only a display of your inability to find it.
if you knew more on the subject - specifically SIMD and MIMD - you would be able to find a source in about 0 seconds using google.
i took the liberty: put "8800 vs 2900 simd mimd" into google and the first result is this. it gives a pretty good explanation of what is going on.
http://www.behardware.com/articles/671-2/ati-radeon-hd-2900-xt.html -
Wow you guys actually have the patience to write so much.... well done..
-
well, regardless of the 3dmark scores and whether or not it is released yet, I think what is really needed is for several dx10 games to be released, because unless I am mistaken that is what these cards were designed for... but doesn't the ati hdmi output also handle sound?
-
Cut the condescension out. There's no call for it. If there were, you wouldn't have to use the crutch of someone else's explanation, let alone have searched for it. I already mentioned the efficiency differences due to dependencies inherent in R6xx's architecture, several times in fact. I said as much IN MY LAST POST, except I did so by means of explaining G8x's strength as opposed to highlighting R6xx's weakness.
-
Clearly what we need are cards that use cheese as power and result in 42.
-
Clearly. Which is precisely why I have decided to develop the all new Cheeseon 42 GPU! Coming to a graphics card near you this fall
-
how much???
-
I'm buying stock in your company!
-
The Forerunner Notebook Virtuoso
Well at least if you fry the card by overclocking you've got yourself melted cheese. Fondue anyone? -
masterchef341 The guy from The Notebook
hah. so, i win, right?
i'm sorry i got a little condescending, but i smelled condescension first! with the "cite a source or that's not true" thing.
anyway, the gpus are way too intricate and complex to compare specifications side by side and come up with any meaningful result. i've been saying that forever. it'll be a cold day in hell before you can really match up any of the parts side by side and say "these are functionally equivalent and show a clear winner". i'm not bashing ati or nvidia; the fact is that nvidia's individual shaders are more complex and handle more data / clock, and they are also clocked higher. however, ati has a massive number of total shaders. when all is said and done they have similar shader power. tada.
you are the one who asked for a source. i figured the time for explanation had left.
don't ask me for a source and then accuse me of relying on sources as a crutch, and i won't be condescending.
im gonna go find a good cheese cooling solution for my new cheese 42 gpu. bbl. -
Why, $42, of course
Oooh, you should be working for the marketing department with ideas like that. Want a job? -
ROFL u guys make me laugh. this is a very entertaining thread. and has some useful info too.
-
$42 plus the possibility of melted cheese, what a sweet deal
-
i prefer swiss cheese to american cheese. cheddar is definitely the best cheese out there. and i wish i had $42.00, i'm poor
-
hmmmmmmmmmmmmmmm.....Cheese...*drools*
-
The Forerunner Notebook Virtuoso
I'm listening...for years my ideas for cheese powered electronics have been scoffed at...fools will pay, THEY'LL ALL PAY! -
OMG this thread has gone WAY off topic. ROFL!
-
Ooh, discussion about cheese. Sweet. I like gouda and a swedish cheese called "gräddost". While that translates to "cream cheese", it isn't.
-
I noticed the ATI HD 2600 is listed in Fujitsu's 6460 17" notebook. That may be a sign (along with its appearance in the HP "Dragon" system, and so far no systems smaller than 17") that it is an MXM III size chip/card as opposed to an MXM II.
(Of course, since all MXM cards are backwards compatible with previous MXM models, they may be size II cards/chips.)
http://store.shopfujitsu.com/fpc/Ecommerce/buildseriesbean.do?series=N6460
Guess we will just have to wait and see if it comes out in any 15.4" systems, but as of right now it looks like it is competing with the 8700 in 17" and above systems. -
masterchef341 The guy from The Notebook
that's not a good thing for the hd 2600. i really doubt it's going to be restricted to 17"+ laptops for long though.
-
Ok, so I scrolled from the second page to the last and suddenly we are talking about Cheese. Cheers to fabarati 'cause he spelled Gouda right. As a Dutchman, I fly on cheese.....I react to it much like Homer Simpson does to beer....well...anyway
Back to the topic. ATI and NVIDIA are always in a neck-and-neck race. Every part will have its counterpart and I don't think either will outrun the other by much.
More interesting to the notebook user is which one will give you more battery life.......any thoughts on that???? -
In the notebook middle and low end, ATI and nVidia are/were equal. At the high end, nVidia rules supreme.
-
Well seeing as the HD 2600 is now available in iMacs, it's only a matter of time before we see it in the MBP and other 15" form factors.
-
Ah, but those aren't mobility radeon (at least they're not called that).
-
The MBP just had its refresh not too long ago.
I personally would highly doubt they would go through the effort to add a new card that doesn't show a high performance boost.
Last refresh put the MBP onto Santa Rosa and DX10; changing just the card to an ATI wouldn't be a sound business choice IMHO. -
Actually you're right, they probably wouldn't put it into the MBP. However, Apple generally uses mobile cards in their iMacs, so I think we should be seeing it in 15" models soon.
-
Shadowfate Wala pa rin ako maisip e.
Ummm, Has anyone tried to test the ATI Mobility Radeon HD 2600 yet?
-
no, except from shady sites on the internet or general testing of the card
so nothing said here means anything as no one has done an in depth review on the performance of the card -
Not to mention that the previous pages were more about cheese...
-
I have a Toshiba A200S with the ATI Radeon HD 2600 (256MB); I scored around 2732 on 3dmark06.
Window Experience Index scores 4.7 for both Aero and Gaming performance.
I will provide screen captures later tonight of the official 3dmark06 score.
I hope that provides some help... -
Yes it does help, thanks!
If you could tell us the clock speeds and drivers that you are using, that would also help. Initially though, the scores are close to the 8600gs GDDR3 and 8600gt GDDR2. Although that's synthetic benchmarking, I consider that a good score regardless of the clocks, especially considering that's the non-XT version.
-
And please update your drivers to something based on Cats 7.8 instead of the default drivers (optimizations for the HD 2x00 series).
-
These are my specs...
Toshiba Satellite A200S
Intel Core 2 Duo 2.0GHz, 4MB L2, 800MHz FSB
2.0GB (2x1GB) DDR2 PC5300
160GB 5400RPM SATA HDD (Primary)
120GB 5400RPM SATA HDD (Secondary)
ATI Radeon HD 2600 256MB
15.4" TruBrite Display
DVD Supermulti DL Drive w/Label Flash
Intel 4965 802.11 a/b/g/n WiFi
1.3M Web Cam
Finger Print Reader
Windows Vista Premium 32bit
6 Cell Li-Ion Battery
(NOTE: This 3dMark05 result is with ATI's Optimal Quality Setting)
(NOTE: This 3dMark06 result is with ATI's Balanced Setting)
(NOTE: This 3dMark06 result is with ATI's High Quality Setting)
(NOTE: This 3dMark06 result is with ATI's Optimal Quality Setting)
Because 3dMark05 and 06 freeze at the splash screen, I had to remove the Direcpll.dll file from %WINDIR%\system32\Futuremark\MSC\ in order to allow the benchmark to run. This is required due to a futuremark conflict with the latest drivers from ATI for the HD 2600 and HD 2900 video cards. I have the latest ATI drivers installed (version 7.8 as of 8-13-2007).
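For anyone who wants to script that workaround, a minimal sketch is below. It assumes the exact path quoted above, keeps a backup instead of deleting the file, and needs to be run from an administrator prompt; use at your own risk.
```python
# Sketch of the workaround described above: move Direcpll.dll out of the
# Futuremark folder so 3DMark05/06 can get past the splash screen with the
# newer ATI drivers. Path and filename are exactly as quoted in the post.
# Run from an elevated (administrator) prompt; keeps a backup, deletes nothing.
import os
import shutil

dll = os.path.expandvars(r"%WINDIR%\system32\Futuremark\MSC\Direcpll.dll")
backup = dll + ".bak"

if os.path.exists(dll):
    shutil.move(dll, backup)
    print(f"Moved {dll} -> {backup}")
else:
    print("Direcpll.dll not found (already moved, or a different 3DMark install).")
```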
FEAR with Auto Detect GPU/CPU Settings(1024x768):
FEAR with Maximum GPU/CPU Settings(1024x768):
-
ShadowoftheSun Notebook Consultant
Not bad. Was that done at 1280x800 or 1280x1024? I assume the former because it says you didn't run at recommended settings, but you never know
-
It was at 1280x800
-
What were the memory and core clocks on your computer?
-
Another one with Toshiba A200-170 (C2D 1.8/ 1GB RAM/HD 2600 256MB)
3D Mark 03- 7280
3D Mark 05- 5670
3D Mark 06- 3037
Half Life 2 Max Details 1280x800- 89fps
Quake 4 benchmark 1280x800- 52fps
Details including more test with AA/AF: http://www.netzwelt.de/news/75934_4-performance-preiswert-toshiba-satellite-a200.html -
So it's 2 times an 8600GS, huh?
Is it DDR2 or GDDR3?
I just looked at toshiba's website. The price is nice ($1700CAD) BUT the Canadian version has a T54xx rather than a T7100 and a 250GB HDD. The US version is customizable like you want from the website
It scores only 3000pts, but with a T7400 or 7500 I'm sure it would be in the high 3000's or even maybe hit the 4000 mark -
Toshiba is GDDR3 as far as I know.
Still don't think it's gonna go anywhere near 4k- that sounds more like HD 2600XT.
What I'm interested in the most is overclocking - it's 65nm after all - that can help a lot. Since the HD 2600 and 2600XT are the same card with different core clocks (memory probably differs), you can hope for high overclockability. -
What's interesting to me is the clock speeds (and to a lesser extent, the WEI).
***The following is all speculation...***
(yeah, ok...I shouldn't even talk about the WEI, so sue me)
The dude from HK with the HP 8510p got a Graphics score of 5.9 and Gaming of 5.1, quite a bit higher than the dual 4.7 seen in this thread. Also, the Spanish preview of the 2000x series from May lists 5.9 and 4.9 as the scores.
Drivers maybe? Or, DDR2 vs. GDDR3, OR clock speeds.
The clock speeds for the HD 2600 are listed at 400-500mhz / 550-600mhz. I see the Toshiba from the last page was clocked at 400mhz. This could mean it's the DDR2 version OR Toshiba just clocked it at the very bottom.
In that case, 3000 in 3dMark06 with bottom-barrel clocks (and maybe even DDR2) isn't bad.
The Spanish HD 2600 review lists the DDR2 version at 512mb, so maybe it only comes in that size. In which case the Toshiba should be GDDR3, with lots of room to clock up (hell, Notebookcheck of all people list their clocks at 500/600 and the Spanish review was running at 450/700!)
Anyway...very interesting :O
Spanish review (May) http://www.chilehardware.com/Articu...ATI-Mobility-Radeon-HD-2600-200705141810.html
Notebookcheck (sucks, but meh)
http://www.notebookcheck.net/ATI-Mobility-Radeon-HD-2600.3771.0.html -
Yea, if the toshiba is really running at only 400mhz then those scores are very very good. I think it's pretty safe to say the HD2600 is better than the 8600GS if both were at the same clock speeds (600 core/700 memory), especially considering this is Toshiba, who is notorious for underclocking GPUs... pulling 28FPS in F.E.A.R at max and 2900 3Dmark06 is very good (the asus F3 could only get 24FPS in F.E.A.R at the same settings and 2200 3Dmark06). Although it's really not a big enough difference in performance to make it a main purchasing decision, I'd be more interested in the temperatures these cards run at and which has longer battery life.
Now to wait to see if a 2600XT can fit in a 15.4" laptop
-
My Toshiba A200S is running the ATI Radeon HD 2600 at a Core Clock of 500MHz, and the Memory Clock is 400MHz.
-
OK you have to tell me how to update to the latest drivers.. I tried to use some atitool that would let me use the latest downloaded drivers from ATI for my mobile HD2600 but it didn't recognize my card. I have a fujitsu n6460 with HD 2600 clocks: 500 core/600 memory..
right now im using default drivers that was provided by fujitsu..
Oh and any way to overclock this baby yet?