The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information that had been posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.

    Intel X3100 users rejoice! It's finally here! New Pre-Beta Drivers

    Discussion in 'Gaming (Software and Graphics Cards)' started by epictrance4life, Jun 7, 2007.

  1. addman

    addman Notebook Guru

    Reputations:
    0
    Messages:
    60
    Likes Received:
    0
    Trophy Points:
    15
    Ok! 2 questions about the X3100: 1. Does Rome: Total War run WELL on it? 2. For best graphics performance do you recommend running Vista or XP? Cheers!
     
  2. Yakumo

    Yakumo Notebook Consultant

    Reputations:
    5
    Messages:
    130
    Likes Received:
    0
    Trophy Points:
    30
    no, it doesn't run at all.
    The box for the game specifically mentions the intel 4500 as a working exception to their requirements, but the older Intel kit is a non starter.


    edit: thinking of the newer one, sorry, ignore the above.
     
  3. addman

    addman Notebook Guru

    Reputations:
    0
    Messages:
    60
    Likes Received:
    0
    Trophy Points:
    15
    That's weird... Rome: Total War was released like 4-5 years ago. I don't think the X3100 even existed then, let alone the 4500 :)
     
  4. dukka

    dukka Notebook Consultant

    Reputations:
    11
    Messages:
    194
    Likes Received:
    0
    Trophy Points:
    30
    Why can't it run? I have personally tried this on my laptop; it ran just a little laggy on medium settings. I've since deleted it because I can't get into the turn-based part of the Total War series. If I still had it I could give you a screenshot.
     
  5. addman

    addman Notebook Guru

    Reputations:
    0
    Messages:
    60
    Likes Received:
    0
    Trophy Points:
    15
    A screenshot would be nice thank you :)
     
  6. Teh N00b

    Teh N00b Notebook Geek

    Reputations:
    20
    Messages:
    81
    Likes Received:
    0
    Trophy Points:
    15
    For Windows 7, which is better to use: the supplied drivers or the Vista drivers? I'm sorry if this has been asked before, but I can't be bothered looking through every page to search for an answer (and an older answer might not apply anymore anyway, since new drivers keep coming out).
     
  7. dukka

    dukka Notebook Consultant

    Reputations:
    11
    Messages:
    194
    Likes Received:
    0
    Trophy Points:
    30
    I've left my install disc in my hometown, sorry, but I found a video on YouTube showing Rome: Total War being played on the X3100 here; otherwise you can download the demo and try it yourself, but I'm sure it's playable :D
     
  8. addman

    addman Notebook Guru

    Reputations:
    0
    Messages:
    60
    Likes Received:
    0
    Trophy Points:
    15
    Yeah I saw that one too, excellent.
     
  9. adyingwren

    adyingwren Notebook Evangelist

    Reputations:
    77
    Messages:
    658
    Likes Received:
    0
    Trophy Points:
    30
    I can confirm that it WILL run on an old 945 or older chipset. I've got a 5-year-old desktop sitting around here with integrated graphics and it plays Rome half-decently.
     
  10. rmcrys

    rmcrys Notebook Guru

    Reputations:
    1
    Messages:
    68
    Likes Received:
    0
    Trophy Points:
    15
    For games, use the Vista drivers (as most DirectX games fail to run with the Windows 7 ones); otherwise keep the Windows 7 drivers, which are faster.

    X3100 (and intel IGPs in general) sucks for games! I have a desktop media center with nVidia's nForce 630i (with a GeForce 7100 IGP) and it's much faster. Not only is the 7100 IGP itself about 2x faster, but nVidia drivers really live up to the task in games (which Intel drivers don't). Of course my laptop is mainly for work, but it wouldn't hurt to have 2-3x more 3D power and full Photoshop acceleration (it crashes when using the X3100 on my setup).

    PS: even nVidia's Ion platform gives new life to the (sloooow) Atom CPU :)
     
  11. Yakumo

    Yakumo Notebook Consultant

    Reputations:
    5
    Messages:
    130
    Likes Received:
    0
    Trophy Points:
    30
    Sorry! I don't play the Total War games and I got the names mixed up; I was thinking of the new one. I was at a friend's place just a couple of days before, and their housemate asked me to fix the game for them as they'd just bought it and it didn't run, and that requirement was written on the back of the box.
     
  12. 7oby

    7oby Notebook Evangelist

    Reputations:
    151
    Messages:
    457
    Likes Received:
    0
    Trophy Points:
    30
  13. diegorborges

    diegorborges Notebook Enthusiast

    Reputations:
    1
    Messages:
    46
    Likes Received:
    0
    Trophy Points:
    15
    I hope Intel doesn't forget us and only think about the G45 people... :(
     
  14. dukka

    dukka Notebook Consultant

    Reputations:
    11
    Messages:
    194
    Likes Received:
    0
    Trophy Points:
    30
    I think that is happening. Maybe the X3100 is nearly at its peak already [sigh]
     
  15. diegorborges

    diegorborges Notebook Enthusiast

    Reputations:
    1
    Messages:
    46
    Likes Received:
    0
    Trophy Points:
    15
    No news buddies? :(
     
  16. stanny1

    stanny1 Notebook Consultant

    Reputations:
    2
    Messages:
    152
    Likes Received:
    0
    Trophy Points:
    30
    Where can I download previous versions of the driver? I updated to the latest one and it keeps crashing.
     
  17. nicocarbone

    nicocarbone Notebook Enthusiast

    Reputations:
    1
    Messages:
    45
    Likes Received:
    0
    Trophy Points:
    15
    Hi everybody. I have some info I want to share with you.

    Last week Ubuntu 9.04 was released to the public. Among many nice improvements, this new version of Ubuntu brings an updated Intel graphics stack, with full OpenGL 2.0 support and DRI2. It is difficult and mostly irrelevant to explain what DRI2 does exactly, but the important thing is that it is not enabled by default, because it is not completely stable yet and is far from being optimized. In fact, the whole Linux Intel graphics stack has been reworked and is mostly "unpolished". (It is important to note that these drivers are done by the community, not by Intel, and they are not in this state because of a lack of work; they were completely rewritten in order to make them more modern and clean, and the work is not finished yet.) Here is an insight into the state of the Linux drivers: http://www.phoronix.com/scan.php?page=news_item&px=NzIyMA

    Despite all of this, I thought it would be a good idea to compare Linux graphics performance versus Windows 7 with the X3100. I updated the Linux kernel to 2.6.29.1 and installed the latest development drivers from Launchpad. I activated GEM and DRI2, and the tests were done with Compiz ON.
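
    For anyone wanting to reproduce this, here is a minimal sketch of how UXA/DRI2 can be switched on with that driver stack (the xorg.conf option name is the stock xf86-video-intel one; exact package versions on Ubuntu 9.04 may differ):

    Code:
    # /etc/X11/xorg.conf - Device section (xf86-video-intel 2.6/2.7 assumed):
    #   Section "Device"
    #       Identifier "Intel GMA X3100"
    #       Driver     "intel"
    #       Option     "AccelMethod" "uxa"   # UXA acceleration, needed for GEM/DRI2
    #   EndSection
    # after restarting X, check what actually got enabled:
    grep -iE "AccelMethod|DRI2" /var/log/Xorg.0.log
    glxinfo | grep -E "direct rendering|OpenGL renderer"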

    I downloaded Nexuiz, an open source FPS game based on the Quake engine ( http://www.alientrap.org/nexuiz/) and used it as a benchmark (It has Windows, OSX, Linux and Linux x64 binaries)

    I tested with 1280x800 resolution, using the Normal, Medium and Low in-game presets. The benchmark is done by entering "timedemo demos/demo1" in the game's console. Here are the results (min/avg/max):

    Linux x64:
    Normal 5/10/17 (there were rendering artifacts in Normal preset)
    Medium 9/16/32
    Low 19/27/44

    Windows 7:
    Normal 5/11/26
    Medium 6/14/38
    Low 12/21/44

    As you can see, despite the highly buggy and "experimental" nature of the Linux drivers, the performance is equal to or better than under Windows 7. Only in the Normal preset is the average FPS higher on Windows than on Linux, and in that case the image is not properly rendered under Linux (apparently there are still some bugs in the shading-language support of the Linux drivers).

    I don't know exactly whether there are other factors involved, but if the buggy and experimental Linux drivers already perform better than the Windows ones, it seems there is still a lot of room for improvement in the Windows drivers. What do you think?
     
  18. IntelUser

    IntelUser Notebook Deity

    Reputations:
    364
    Messages:
    1,642
    Likes Received:
    75
    Trophy Points:
    66
    The Phoronix guys say Linux driver development is comparatively better than on Windows. Intel does have a team working on it; I think the group is from Tungsten Graphics. You should test against XP or Vista, not Windows 7.

    You have to realize that at some point the limitation is the hardware (not necessarily the graphics hardware itself, but anything related to it), not the drivers. It could also be that the Linux OS itself is that much better.

    http://www.anandtech.com/mac/showdoc.aspx?i=3435&p=13

    The biggest thing that perplexes me is why the X3100/X4500 performs so horridly at low settings. You lower the settings to get a more playable experience, yet doing that achieves almost nothing! I think it's really that the FSB-based system is a bigger limitation than JUST the CPU aspect. At low resolutions it's probably the CPU that's bottlenecking the GPU, but with the GPU sapping all the bandwidth, the CPU can't perform.
     
  19. nicocarbone

    nicocarbone Notebook Enthusiast

    Reputations:
    1
    Messages:
    45
    Likes Received:
    0
    Trophy Points:
    15
    I agree, Linux driver development is generally better than on Windows, but my point is that, even now when the new driver architecture is not fully tested and has lots of performance regressions (there are lots of Phoronix articles about this), the Linux drivers perform better than the Windows 7 ones. And you are right, Tungsten Graphics is "working" for Intel developing Linux drivers, but those drivers are open source.

    About testing XP or Vista drivers: I was using the latest Windows Vista x64 drivers under Windows 7 x64, which is the best combination I've found, at least in DirectX games.

    I am sure that the hardware is a big limitation, but what I was trying to show is that the Windows drivers are far from fully optimized. And yes, I do think that Linux as an OS is better overall; however, I don't understand what your point was with the AnandTech link, other than making me ask what OS X is doing so much better than Vista in order to get so much better battery life.

    I have also been perplexed by the unusual performance of the GMA965 in light games. In fact I've found, for example, that COD2 under DirectX9 runs almost as badly as COD4... it makes no sense. It may be related to some bottleneck in the GPU <-> CPU communication, or to the low memory bandwidth, but IMHO the drivers are not completely innocent.
     
  20. IntelUser

    IntelUser Notebook Deity

    Reputations:
    364
    Messages:
    1,642
    Likes Received:
    75
    Trophy Points:
    66
    I just linked the AnandTech article because maybe Windows' coding really is that bad. An OS giving 20-30% differences in battery life is kind of ridiculous. What else is wrong? Win 7 is better than Vista, yes, but the foundation is still Vista. I know some games perform significantly better on XP than on Vista. It could be the same with Win 7.
     
  21. 7oby

    7oby Notebook Evangelist

    Reputations:
    151
    Messages:
    457
    Likes Received:
    0
    Trophy Points:
    30
    I disagree.

    The Linux drivers have a public bug-tracking system, which anybody can use to raise awareness of a particular bug. Depending on how well-qualified your bug report is, and, in the case of a regression, whether you bisected the source code, your chances are pretty high that your issue gets resolved. I reported numerous blocker bugs to the intel Linux team and each one got resolved. In the Windows domain it's much harder to get intel's attention, but I did actually get one issue resolved there too.

    So let's compare features: the intel Linux drivers have tearing in desktop 3D mode. They hacked something up for the 2.7.0 release in mid-April '09 which resolves tearing for some people, but only during video playback. The composition manager still has tearing. Jesse Barnes had another/the same hacked version back in February '09 ( click and read "painflipping"), which resolves tearing in general, but I don't know whether it really entered the mainline code. In Windows Vista Aero I had a tearing-free display from day zero! Suspend/resume is broken in most combinations of kernel/drm/gem/xorg/intel driver. I never had suspend/resume issues in Windows. MPEG-2 support (broken), tiling (partially broken), VT switch (partially), no video overlay, ... in Linux. The list is very long, and if you tracked Linux driver development over the last year it was horribly broken: at any point in time there were at least 3-12 features broken.

    It boils down to:

    "In the Linux open source Intel driver world, you can solve problems you would never have in the Windows domain."

    Let's talk about internals: the core feature intel has been working on in the Linux world is a memory manager for the GPU (GEM). When Keith Packard (lead architect for the Linux intel driver) presented GEM in mid-2008, he was relieved that the attending intel Windows guys had something similar in Vista (meaning his solution can't be that bad):
    http://keithp.com/blogs/intel_gfxcon_2008/

    Let's talk about performance: I made my point in this thread before and provided evidence that commercial OpenGL implementations (i.e. those used in Windows and MacOS) in general perform better than Mesa. I currently lack the time to do good benchmarks of my own to investigate the numbers nicocarbone found any further. In theory it might indeed turn out some day that the intel Mesa-based OpenGL stack is faster than the Windows one, though I seriously doubt that at the moment. The Mesa stack gets continuous updates, though almost all changes are to Mesa itself and address the performance of Mesa independently of the hardware. From my observation the intel part of the stack actually only gets bug fixes.

    However, even if the OpenGL performance in Linux some day exceeds intel's Windows counterpart, it's meaningless. Intel has emphasized over and over again (e.g. pages 18 + 20 here) that the X3100/X4500 hardware has been designed with DirectX, and in particular DirectX 10, in mind. All their optimization went into the DirectX part of the driver, which is reasonable considering game software support. Intel's greatest coup in this respect is their Windows 3D performance analyzer GPA, which unfortunately doesn't work yet for the X3100 - only the X4500.

    BTW: does Google Earth have a benchmark mode? I'd like to compare DirectX performance on Windows against the OpenGL stack. I know I can approximate this using the half-working GPA tools on the X3100.

    The numerous performance regressions (mostly 2D) analyzed by Phoronix are confirmed by Keith himself in his blog (see the section "Performance Differences"). There is still a long road to go on Linux. Stability is another issue: most real 3D apps (not just benchmarks) have crashed quite often with the intel drivers ( click) recently.

    Intel paid Tungsten to produce some open source code for their chipsets, in particular the 3D Mesa acceleration and the 2D driver code. However, the intel open source team in charge of the intel Linux drivers doesn't have any relationship with Tungsten. The intel Linux driver team is on the payroll of intel, and none of them ever worked for Tungsten AFAIK: Keith Packard, Eric Anholt, Carl Worth, Jesse Barnes. In fact some Tungsten code, such as the TTM, vanished from the drivers recently.
     
  22. nicocarbone

    nicocarbone Notebook Enthusiast

    Reputations:
    1
    Messages:
    45
    Likes Received:
    0
    Trophy Points:
    15
    Thank you 7oby for your insight. I agree that the Windows drivers have more features than the Linux drivers used to have, but I'm afraid this won't be the case in the future. What I mean is: while the Linux drivers are in active development, can we say the same for the X3100's Windows driver? When I said that Linux driver development is generally better than Windows, I was mainly looking at it from the perspective of development for the (already obsolete??) GM965 and GMA950 architectures.

    Performance-wise, I was convinced that 3D performance was better under Windows than under Linux, but the Nexuiz benchmarks showed something different. I know that OpenGL is not a priority under Windows, and that may be a good decision, but the Windows driver has been OpenGL 2.0 capable for longer than the Linux driver, so I thought it would be more polished too.

    If anyone knows of another OS-independent benchmark I could run in order to compare Windows and Linux 3D performance, I'll be happy to run it.
     
  23. 7oby

    7oby Notebook Evangelist

    Reputations:
    151
    Messages:
    457
    Likes Received:
    0
    Trophy Points:
    30
    Yes, let's throw some more games at the OpenGL implementations of Windows and Linux:

    . Quake III Arena Demo (Download Win/Linux)
    . Quake Wars Enemy Territory Demo (Download: Windows, Linux)

    Both have some timedemo benchmarks that can be used. I know you currently run x64, but using 32-bit binaries should also give some clue about the graphics backend performance. In this case I also like that only binaries are available and that they have been packaged by the game companies. There are serious performance differences if you use different compilers (gcc vs. intel). Also, using the same compiler on different platforms yields very different performance: e.g. gcc's floating point arithmetic gets translated to slow x87 FPU code on 32-bit systems by default, whilst on 64-bit systems the 128-bit SSE registers are used by default (and can perform 4 single-precision operations with one instruction). Anyway: it's okay if the game distributors used their compilers with what they think are good compiler options on the given system/platform.
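
    To illustrate that compiler point, here is a quick sketch of those gcc defaults (foo.c is just a stand-in for any float-heavy source file; the flags are standard gcc options):

    Code:
    # 32-bit x86: gcc emits x87 FPU code for floating point by default
    gcc -O2 -m32 foo.c -o foo32
    # 32-bit x86 forced onto SSE scalar math (what x86-64 already does by default)
    gcc -O2 -m32 -msse2 -mfpmath=sse foo.c -o foo32-sse
    # x86-64: SSE math is the default, no extra flags needed
    gcc -O2 -m64 foo.c -o foo64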

    Q3A settings:
    . press ~
    . enter "s_initsound 0"
    . enter "snd_restart"
    . this disables sound output, which might otherwise interfere with the results here.
    . enter "timedemo 1"
    . enter "demo demo001"
    . once timedemo has run you should be able to read the average fps by pressing ~ again

    QWET:
    I'm not sure about a good benchmark here. One problem is that there's no demo run included in the download. I just googled one - don't know if it's a good one:
    http://4newbies.planetwolfenstein.gamespy.com/ETQW/configs.php
    . execute etqw-rthread.x86 to launch the threaded renderer. I don't know whether it's required to parametrize it later with 'r_useThreadedRenderer "2"' for two cores. Actually maybe you don't need to worry about threaded rendering at all, because of this (threaded renderer shines on quads, but probably not on dual cores).
    . press Ctrl + Alt + ~
    . enter the commands you think are good, e.g. "seta com_unlockFPS 1"
    . finally: "timeNetDemo demo_00002" to launch the above downloaded and installed demos

    Document the benchmark settings and which commands you used to execute them.

    Sauerbraten game seems to be useless for benchmarking:
    http://www.phoronix.com/scan.php?page=article&item=sauerbraten_2009_imminent&num=3

    More Linux 3D/gaming benchmark setups:
    http://dri.freedesktop.org/wiki/Benchmarking
     
  24. rmcrys

    rmcrys Notebook Guru

    Reputations:
    1
    Messages:
    68
    Likes Received:
    0
    Trophy Points:
    15
    Intel graphics are among the most-sold graphics in the entire world and yet have some of the lamest drivers around. All Atom platforms use Intel 950 graphics (which is a pity, as they could at least provide the X3100), as do most low-end laptops and older Apple laptops.

    Today I was reading about the new Acer nettop based on the Ion platform (Atom + nVidia 9400M) and there you can see that Intel graphics are *very* bad, especially the driver support. nVidia drivers provide almost every possible setting and support: DirectX and OpenGL, monitor adjustments and wide gaming support, even on low-end graphics...

    -----------------------------------------------------------------------

    PS: I'm using Windows 7 build 7100 (and previously build 7068) with the built-in Intel X3100 driver and I'm having problems with the W7 monitor turn-off. After 2 min (or whatever time I set for Windows to turn off my laptop's monitor to save energy) it flashes (on - off - on) and remains ON. With the Vista drivers the problem remains. Does anyone know what the problem is?
     
  25. Yakumo

    Yakumo Notebook Consultant

    Reputations:
    5
    Messages:
    130
    Likes Received:
    0
    Trophy Points:
    30
    Fortunately the Atom-based Ion setups have nVidia graphics.
     
  26. 7oby

    7oby Notebook Evangelist

    Reputations:
    151
    Messages:
    457
    Likes Received:
    0
    Trophy Points:
    30
    I couldn't resist benchmarking the X3100 on Linux vs. Vista.

    Quake3 Arena results:
    Vista 1024x768@32Bit : 75,8 fps
    Linux 1024x768@32Bit : 45,6 fps

    Vista 1024x768@16Bit : 111,6 fps
    Linux 1024x768@16Bit : 45,6 fps

    System:
    . Dell XPS M1330 GM965 Chipset with X3100 graphics engine
    . 2x 2GB Dual-Channel DDR2-667 CL5
    . T7500 Mobile Core2Duo 2,2GHz 4MB L2 Cache 65nm

    Windows:
    . Vista 32 Bit Business
    . intel v15.12.4 (7.15.10.1666) graphics driver

    Linux 32-Bit:
    . Kernel 2.6.30-rc4
    . X-Server 1.6.1
    . intel graphics driver 2.7.99.1 UXA/DRI2
    . libdrm 2.4.9
    . Mesa 7.4.1 DRI Intel(R) 965GM GEM 20090418 2009Q1 x86/MMX/SSE2

    Game settings:
    . settings all default
    . s_initsound 0
    . snd_restart
    . r_swapInterval 0
    . timedemo 1
    . demo demo001

    Test method:
    . each result reflects the average fps and is the mean of three consecutive runs
    . SpeedStep in both OSes fixed at the 2.2GHz clock
    . the 16-bit resolution is dithered in Windows, while the Linux OpenGL stack doesn't seem to support 16-bit rendering: setting 16-bit neither reduces visible image quality nor improves the fps.

    I know it's just one game I tested, but I'm not a gamer and I'm currently lacking the time to benchmark more games and settings. The results are contrary to the findings of nicocarbone under 64-bit with Nexuiz. However, I will use this setup to track Linux performance improvements/regressions.
     
  27. 7oby

    7oby Notebook Evangelist

    Reputations:
    151
    Messages:
    457
    Likes Received:
    0
    Trophy Points:
    30
    I did some more benchmarking:

    Unreal Tournament 2004 Demo Build 3334

    Same System and drivers as above.

    1024x768@32-Bit:
    Vista 32-Bit DirectX8 : 68,5 fps
    Vista 32-Bit DirectX8 HWTL : 50,4 fps (*)
    Vista 32-Bit DirectX9 : application crash
    Vista 32-Bit OpenGL : 57,2 fps
    Linux 32-Bit OpenGL : 23,8 fps (**)

    From what I've googled, UT2004 often performs faster in DirectX on ATI-based cards but faster in OpenGL on nVidia cards. I conclude that the UT2004 rendering engine isn't tuned towards a particular 3D API, nor is the OpenGL backend an adapter on top of DirectX.

    That suggests: DirectX 3D performance on intel graphics is likely to be higher than OpenGL performance. Hardware T&L is not a good idea here. And as already observed with Quake3 Arena: Linux OpenGL performance is still much worse. I also noticed, using the OpenGL backend for UT2004 on Windows, that character shadows are missing. I have to check on Linux again to see whether that's also the case there.

    Command Line:
    Code:
    ut2004 "dm-rankin?spectatoronly=1?numbots=1?quickstart=1?attractcam=1" -benchmark -seconds=60 -nosound
    modified UT2004.ini:
    . MinDesiredFramerate=0
    . UseVBO=True

    (*) By default the intel driver runs all DirectX8 apps with hardware T&L. However it switches to software T&L rendering for UT2004, as described here. The numbers suggest intel's decision to use SWTL for UT2004 is a good one performance-wise.

    (**) Loading prerecorded demos crashes in Linux on my machine. When launching spectator mode with the "numbots" parameter, the bots spawn at different locations on Windows vs. Linux. This results in different automated gameplay for Windows and Linux, though on each system the scene itself plays back exactly the same every time. "numbots=1" is still different between Windows and Linux, but it's the best I could do at the moment. Given how big the performance difference between Linux and Windows is, that should do.
     
  28. hitman72

    hitman72 Notebook Enthusiast

    Reputations:
    0
    Messages:
    19
    Likes Received:
    0
    Trophy Points:
    5
    Yes, I play Urban Terror (a Quake 3 Arena mod), and on the X3100, because it uses OpenGL, the game's fps are quite low. Hoping for new drivers... ciao
     
  29. nicocarbone

    nicocarbone Notebook Enthusiast

    Reputations:
    1
    Messages:
    45
    Likes Received:
    0
    Trophy Points:
    15
    I apologize for not posting for a while; I had a busy week. I couldn't install the Quake 3 demo on Ubuntu x64 (apparently the installer is 32-bit only), and Enemy Territory gives me a segmentation fault every time I try to enter a game.

    I had Urban Terror installed under Ubuntu, and it works fairly well, around 30fps at 1280x800, everything on high. I'll try it under Windows for comparison.

    7oby: could you try Nexuiz in your configuration to see if you get similar results to me?
     
  30. hitman72

    hitman72 Notebook Enthusiast

    Reputations:
    0
    Messages:
    19
    Likes Received:
    0
    Trophy Points:
    5
    On XP I play Urban Terror at 1024x768; with no enemy in front of me I get up to 50fps, but with just one enemy in front of me the fps drops to 15, and with 2 enemies... The game isn't very fast even with a 7950GT, so... ciao, and sorry for my poor English
     
  31. 7oby

    7oby Notebook Evangelist

    Reputations:
    151
    Messages:
    457
    Likes Received:
    0
    Trophy Points:
    30
    I don't remember what I had to do, but it wasn't easy to install. It was more than
    DISPLAY="" sh ./linuxq3ademo-1.11-6.x86.gz.sh -target ./q3a
    I think I even had to change the POSIX compatibility level.

    This is what I got:

    (min/avg/max)
    1024x768@32-Bit Normal:
    Vista 32-Bit : 7 / 16 / 37
    Linux 32-Bit : 6 / 12 / 20
    Linux 32-Bit 6 / 10 / 18 Mesa 7.5-rc1 (lighting bug; too dark)
    Linux 32-Bit 7 / 13 / 20 Mesa 7.6-devel (lighting bug; too dark + no characters drawn)

    Disabling OpenGL 2.0 shaders (GLSL) resulted in:
    Vista 32-Bit 4 / 14 / 51
    Linux 32-Bit 3 / 10 / 29 Mesa 7.4.1
    Linux 32-Bit 2 / 8 / 23 Mesa 7.5-rc1
    Linux 32-Bit 3 / 12 / 31 Mesa 7.6-devel (no characters drawn => less polygons)

    If you compare Mesa version strings, it's not surprising Mesa 7.4.1 outperforms the other ones:
    Mesa 7.4.1 DRI Intel(R) 965GM GEM 2009 0418 2009Q1 x86/MMX/SSE2
    Mesa 7.5-rc1 DRI Intel(R) 965GM GEM 2009 0114 x86/MMX/SSE2
    Mesa 7.6-devel DRI Intel(R) 965GM GEM 2009 0114 x86/MMX/SSE2

    Settings:
    . configurations as posted above
    . Normal graphics preset
    . timedemo demos/demo1
    . command line: nexuiz -nosound
    . no difference between linux glx and sdl driver

    I didn't test the Medium/Low settings yet because I first wanted to figure out which Mesa driver works well. Looking into the Nexuiz directory I see:
    . Linux 32 and 64 Bit builds
    . Windows only 32-Bit build (!!!)

    Therefore the test nicocarbone did isn't an apples-to-apples test. A 64-bit application build is slightly faster than the 32-bit one. Furthermore, if you launch the 32-bit build of Nexuiz on Win7 64-bit, what kind of OpenGL stack is called? Is it a 64-bit one accessed through WoW64? Or a 32-bit one? I know from IE/Firefox that running the 32-bit version will only pull in 32-bit plugins and vice versa. And if Nexuiz pulls in a 32-bit OpenGL driver, there might be bigger performance differences.

    Once I knew that Mesa 7.4.1 performs best (at least among all the GEM-enabled releases; the older TTM-based ones might still be faster (*)), I threw in some more compile flags ("-O3 -march=core2 -mfpmath=sse") and tested the Normal/Medium/Low settings:

    [​IMG]

    What can be seen from this picture?

    . First of all, the findings in the upper part are basically consistent with the findings nicocarbone posted here. If you colored his numbers according to the same scheme it would look almost identical. That means the average and minimum fps are higher at Medium/Low settings for Linux in Nexuiz.

    . However, I still wouldn't call the Linux drivers more optimized. The picture changes in favor of Windows if you disable OpenGL 2.0 shaders, to give just one example. And I tested two more apps, Quake3 Arena and UT2004, and these offered 1.5x - 2x the performance under Windows, and even more if you are able to use DirectX. The use of DirectX might be one of the reasons why CoD2 performs fairly well on the X3100 chipsets. At least this is what I read in this thread; as I said, I'm not a gamer.

    (*) Actually I tested this as well in the meantime with the TTM-based Mesa 7.2 included in Ubuntu 8.10. In general there are more rendering bugs, resulting in fewer polygons drawn, so it can't really be compared; however, it's not faster than Mesa 7.4.1.
     
    Last edited by a moderator: May 8, 2015
  32. nicocarbone

    nicocarbone Notebook Enthusiast

    Reputations:
    1
    Messages:
    45
    Likes Received:
    0
    Trophy Points:
    15
    I played Urban Terror On-line under Ubuntu x64, and even with many enemies on screen, it never went under 20-25 fps (and never higher than 35-40 with no enemies), and at 1280x800. As soon as I can I'll try Urban Terror under Win7 and post my results.

    7Oby:
    You did a great and incredibly detailed analysis. Your findings are similar to mine, but your Windows performance is a bit higher (compared to the Linux numbers; overall you have better performance on both OSes, but you also have a faster CPU). As you said, maybe the use of a 32-bit OpenGL stack under Windows x64 may explain this; however, I've seen several benchmarks comparing Vista x64 with Vista x86 showing that there is no real difference in x86 games' performance. Could this be a problem with the X3100's drivers?

    About disabling OpenGL 2.0 shaders: don't you think it is strange that there is very little performance difference with or without shaders under Windows? Or, even stranger, that disabling shaders lowers FPS under Linux? I am not sure if I fully understand what that option does, but it seems like disabling shaders should increase performance.

    Why do you say that CoD2 performs well on the X3100? I've found that, even using DirectX7 and 640x480, some maps (Moscow for example) average around 13-16 fps. For comparison, CoD4 with DirectX9 (there is no option for DirectX7, of course) performs roughly the same or even a bit better.
     
  33. 7oby

    7oby Notebook Evangelist

    Reputations:
    151
    Messages:
    457
    Likes Received:
    0
    Trophy Points:
    30
    Remember I tested at 1024 x 768, which has about 23% fewer pixels than the 1280 x 800 you tested (or, put the other way around, your resolution has roughly 30% more pixels).

    No, I don't think that it's strange. There are other OpenGL extensions, such as ARB_vertex_program and ARB_fragment_program, which can achieve similar results to GLSL. How they perform depends on the jitter (just-in-time compiler), the GPU architecture and the game scene. In some cases it might be faster to use GLSL and in others not. However, GLSL is more portable across platforms and more general, and easier for game developers to use.

    Maybe CoD4 performs better than CoD2 and I mixed them up. I wanted to emphasize that DirectX performance on the X3100 is in general faster than OpenGL performance. However, your statement is consistent with what intel claims: a given game on DirectX9 is faster than on DirectX8, and DirectX10 is even faster - if you can call the X3100 fast at all. It's about the diagram in this posting, which intel once presented at GDC.
     
  34. 7oby

    7oby Notebook Evangelist

    Reputations:
    151
    Messages:
    457
    Likes Received:
    0
    Trophy Points:
    30
    I played with intel's GPA tool to investigate the general low performance of intel graphics:
    http://www.intel.com/software/gpa/
    Although X3100 isn't supported and some features don't work, I quickly recognized that the memory write speed is indeed limiting.

    To confirm this analysis I switched from a dual-channel memory configuration to single-channel memory by pulling one of my two 2GB modules out. I benched some games and got these results:

    [​IMG]

    The notebook uses DDR2-667 CL5 modules. And going from single channel to dual channel means raising the peak transfer rate from 5.35GB/s to 10.7GB/s.

    However, the results are very surprising. If doubling the memory bandwidth results in a performance boost of up to 46.6%, it clearly shows that memory bandwidth limits intel graphics.
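
    For reference, the peak numbers above work out like this (back-of-the-envelope only: one 64-bit channel at 667 MT/s, ignoring latency and controller efficiency):

    Code:
    # DDR2-667: 667 million transfers/s x 8 bytes per 64-bit channel
    echo "single channel: $(echo '667*8/1000' | bc -l) GB/s"    # ~5.3 GB/s
    echo "dual channel:   $(echo '667*8*2/1000' | bc -l) GB/s"  # ~10.7 GB/s
    # the observed gain from the second module was up to 46.6%, well short of 2x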

    Sharing main memory between the CPU and GPU has always performed much worse than a GPU with dedicated memory. However, shared memory can't be entirely blamed for the low intel performance. Other vendors achieve performance up to 7x faster than intel's solution using the same DDR2-800 memory:
    [​IMG]
    http://www.anandtech.com/mb/showdoc.aspx?i=3432&p=4
    The sideport memory also shows that the CPU isn't stealing much precious bandwidth, otherwise sideport memory would result in a greater performance win. Also the very small GPUs with dedicated memory such as the GeForce 8400M GS have a much higher performance and play in the same memory bandwidth league (here: 6.4GB/s):
    http://en.wikipedia.org/wiki/Geforce8#Technical_summary_2

    I did some digging, but still can't get a clear answer as to why intel graphics performance is limited by memory bandwidth so much. Here are some unsorted thoughts:

    . It's really the write speed that matters. Reducing the number and size of textures (MIP levels) hardly makes a difference. This is obvious, since a single texture is used multiple times in a single 3D scene and GPUs use caching. But to render the scene you actually have to write a great amount of data to main memory. However, the framebuffer bandwidth of 1024x768x32-bit at 50fps is just 0.15GB/s (worked out in the sketch after this list).

    . It was guessed once that intel uses a bad hidden surface detection algorithm. I couldn't find any evidence for that, but actually the opposite: hidden surface detection works. I can't tell how much headroom for improvement is left.

    . Intel has several technologies in place to efficiently use the limited available bandwidth, such as zone rendering and hardware tiling. Still, others seem to have even more efficient technologies in place. Or maybe intel doesn't have sufficiently big GPU hardware write buffers to combine memory accesses into larger bursts.
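
    As a sanity check on the 0.15GB/s figure from the first point, the raw colour-buffer write rate is (one 32-bit write per pixel per frame; z-buffer traffic, overdraw and textures not counted):

    Code:
    # 1024 x 768 pixels x 4 bytes/pixel x 50 frames/s
    echo "$(echo '1024*768*4*50/10^9' | bc -l) GB/s"   # ~0.157 GB/s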

    I really don't know. Still, I like intel graphics, especially since they develop open source graphics drivers for it (or have them developed, depending on the component). So I'm looking forward to Nehalems in notebooks with IGPs, such as Arrandale ( click). Unfortunately it doesn't support triple-channel DDR3 :confused: which might have turned out to be useful for IGP graphics.
     
  35. nicocarbone

    nicocarbone Notebook Enthusiast

    Reputations:
    1
    Messages:
    45
    Likes Received:
    0
    Trophy Points:
    15
    You are right, I didn't realize that, and it surely explains the difference.

    Very nice finding. I always suspected memory bandwidth was a bottleneck, but didn't think it was that important.

    About bad hidden surface detection: I do see that in some games (for example, some custom maps in CoD2) scenes with lots of geometry render really slowly on the X3100, even if most of the geometry is hidden. In fact, in some of these maps the performance is low even if you are looking at a wall, but only if there is lots of geometry behind it. Could this be related to a bad hidden surface detection algorithm?
     
  36. ondris

    ondris Notebook Enthusiast

    Reputations:
    3
    Messages:
    20
    Likes Received:
    0
    Trophy Points:
    5
    Wow, that's really good. Is there any way to use Intel GPA with the X3100?
     
  37. IntelUser

    IntelUser Notebook Deity

    Reputations:
    364
    Messages:
    1,642
    Likes Received:
    75
    Trophy Points:
    66
    Sigh. Good thing you recorded the results. I noticed that 100% more bandwidth resulted in approximately a 55% performance increase. Dual channel doesn't guarantee a 100% bandwidth increase, due to higher latency and lower efficiency among other things, but it does show the memory bandwidth dependency.

    However, people don't believe it matters for the X3000/X4500. The reason for the discrepancy in real-world results regarding memory bandwidth is that the chip can switch between software and hardware vertex shaders (VS). Since Intel set software VS to be used with benchmarks like 3DMark while lots of games run on hardware VS, in actual games the performance is sensitive to memory bandwidth, while in 3DMark it's not (because there the VS is running on the CPU).

    If you have an X3100 or 4500MHD, memory bandwidth matters. DDR2-667 to DDR3-1066 will give a 10-20% performance increase with hardware VS.

    The only thing I can say is: not anymore, at least not the zone rendering part. That disappeared with the X3000, though while it existed it worked well on the GMA950. The X3000 has Early Z instead.

    The thing you really want to look at is the performance scaling with resolution:
    http://www.anandtech.com/video/showdoc.aspx?i=3420&p=4

    Pay attention to the difference in performance for Crysis/Oblivion on the G35 at different resolutions.

    Crysis
    800x600: 9.3
    1024x768: 8.2
    1280x1024: 6.7

    Oblivion
    800x600: 14.7
    1024x768: 12.3
    1280x1024: 9.5
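
    Putting rough numbers on that scaling (a quick sanity check against the figures above, nothing more):

    Code:
    echo "pixel ratio 1280x1024 vs 800x600: $(echo '1280*1024/(800*600)' | bc -l)"  # ~2.73x the pixels
    echo "Crysis fps ratio 9.3/6.7:         $(echo '9.3/6.7' | bc -l)"              # only ~1.39x the fps
    echo "Oblivion fps ratio 14.7/9.5:      $(echo '14.7/9.5' | bc -l)"             # only ~1.55x the fps
    # the fps barely improves as the pixel count drops, which is the poor scaling in question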

    Why is the scaling so poor? I think of another possibility: especially on Intel IGPs, the CPU is so starved for memory bandwidth that at lower resolutions, where the CPU matters more, performance is lower than it's supposed to be. Since Intel wanted to save money on the IGP solution, they went for the bare-minimum approach, while ATI/Nvidia create IGP chipsets with performance in mind. One thing I expect with Arrandale/Clarkdale is that, since they are overhauling the entire system, they will be overhauling the IGP too.

    I'm not fully comfortable with comparing Intel with ATI/Nvidia. See, ATI/Nvidia's IGPs also have the advantage of sharing a similar architecture with their dedicated graphics, which has had lots of time to be optimized for by game developers. Intel's IGPs don't have exactly the same architecture, so those optimizations don't really carry over. Part of the problem is Intel, but another part is lack of developer support. I think the appearance of tools like GPA and the increasing effort by Intel to work with developers are a sign they realize that too.

    See, look at the performance for games like Spore: http://www.anandtech.com/mb/showdoc.aspx?i=3432&p=4

    A couple of months ago Intel released a driver that seemed to be entirely dedicated to making Spore work (the release notes only mentioned fixing Spore). And coincidentally, Spore performs well. For games like Crysis, on which Intel IGPs have abysmal performance, why would the developers of Crysis or Intel care about having them run on an IGP?

    Yeah, I'm looking forward to Arrandale too.

    I think I might have found out a bit about the (surprisingly) low-performance situations. Many people said that in games performance was bad with fog and smoke on. Things like blur also brought performance down drastically. Features like fog/smoke/blur might be math-intensive.

    Here's what is said about the architecture of the X3000:

    "Math functions (EXP, LOG, SIN, etc.) are implemented in a 16-way "Mathbox" external unit with both full and partial precision."

    Here's a tip from Intel's X4500 optimization guide:

    "Reduce the use of macro/transcendental functions where possible.
    Instructions like LOG, LIT, ARL, POW, EXP are more expensive."

    The math performance of the X3000/X4500 is weak. The IGP in Arrandale/Clarkdale is supposed to be the saving grace for Intel integrated graphics (at least the rumors suggest so), and the brief description hints that one of the big enhancements will be geometry processing and math performance.
     
  38. UnReaL596

    UnReaL596 Notebook Consultant

    Reputations:
    6
    Messages:
    197
    Likes Received:
    0
    Trophy Points:
    30
    I started getting this error a while back. It happened every now and then, but now it's getting worse and worse...

    The screen goes from my 1280x800 down to this 640x480 thing:

    [IMG]

    Is this a serious problem or a simple fix? It looks kind of serious to me.

    I'm on the newest drivers as well.
     
  39. IntelUser

    IntelUser Notebook Deity

    Reputations:
    364
    Messages:
    1,642
    Likes Received:
    75
    Trophy Points:
    66
    Try reinstalling drivers??
     
  40. UnReaL596

    UnReaL596 Notebook Consultant

    Reputations:
    6
    Messages:
    197
    Likes Received:
    0
    Trophy Points:
    30
    I did; same thing. I'm going to try older drivers and see what happens.
     
  41. ondris

    ondris Notebook Enthusiast

    Reputations:
    3
    Messages:
    20
    Likes Received:
    0
    Trophy Points:
    5
    Maybe it's a problem in Windows. Try a reinstall.
     
  42. dukka

    dukka Notebook Consultant

    Reputations:
    11
    Messages:
    194
    Likes Received:
    0
    Trophy Points:
    30
    On my laptop that's normally a problem due to overheating; just opening up your laptop and cleaning the dust out of it should solve the problem, I think.
     
  43. diegorborges

    diegorborges Notebook Enthusiast

    Reputations:
    1
    Messages:
    46
    Likes Received:
    0
    Trophy Points:
    15
    Wow, you guys did some surprisingly detailed analysis.

    So, in my case I've got 2GB of DDR2 533MHz. If I swap it for DDR2 667MHz will I get a noticeable performance increase? And will I do even better with 800MHz memory?
     
  44. 7oby

    7oby Notebook Evangelist

    Reputations:
    151
    Messages:
    457
    Likes Received:
    0
    Trophy Points:
    30
    Yes, I'd assume better video performance when going from DDR2-533 to DDR2-667. However, the GM965 (X3100) chipset won't take any advantage of DDR2-800 memory.
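
    Rough dual-channel peak figures for that upgrade (theoretical maxima only; and per the chipset table linked below, DDR2-800 modules would still be clocked down on the GM965):

    Code:
    echo "DDR2-533 dual channel: $(echo '533*8*2/1000' | bc -l) GB/s"  # ~8.5 GB/s
    echo "DDR2-667 dual channel: $(echo '667*8*2/1000' | bc -l) GB/s"  # ~10.7 GB/s, roughly 25% more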

    Also keep in mind that if your CPU runs with an FSB533 clock, your memory will also run as DDR2-533 even if you plug in DDR2-667 memory. The possible combinations are listed in the GM965 chipset document:

    p.132
    [​IMG]
    http://www.intel.com/Assets/PDF/datasheet/316273.pdf

    I'm not aware of any tool that reads the current render clock reliably, since the author of GPU-Z has not been given this piece of information:
    http://software.intel.com/en-us/forums/user-community-for-visual-computing/topic/56443/
     
  45. diegorborges

    diegorborges Notebook Enthusiast

    Reputations:
    1
    Messages:
    46
    Likes Received:
    0
    Trophy Points:
    15
    Well... I've got an Acer Aspire 5920-6690. It's a C2D 2.0 GHz with 4MB L2 cache; it's the Merom platform, I think. And it says its FSB is 800MHz... So I'd better change my memory for 667MHz modules...

    That's what CPU-Z says. Wait... why the hell does it show 300MHz for the memory???

    [​IMG]
    [​IMG]
     
  46. diegorborges

    diegorborges Notebook Enthusiast

    Reputations:
    1
    Messages:
    46
    Likes Received:
    0
    Trophy Points:
    15
    Forget it. How dumb am I...

    333MHz x 2 = 666 (667MHz), right?

    So, no hope for me. Maybe I'll get 2x2GB, or swap one of my 1GB modules for a 2GB one and go to 3GB.
     
  47. diegorborges

    diegorborges Notebook Enthusiast

    Reputations:
    1
    Messages:
    46
    Likes Received:
    0
    Trophy Points:
    15
    And I would love to disable speedstep.
     
  48. 7oby

    7oby Notebook Evangelist

    Reputations:
    151
    Messages:
    457
    Likes Received:
    0
    Trophy Points:
    30
    The SPD tab of CPU-Z only reflects which capabilities are programmed into the SO-DIMM memory module. It reports those to the chipset, and the chipset then decides, given the constraints already mentioned, which clock and timings to use.

    Your memory is indeed DDR2-667 memory and it most likely runs in this mode. You can verify this in the "Memory" tab: if the DRAM frequency is 333MHz, it runs in DDR2-667 mode.

    I see you run Vista. If you go into the advanced settings of your power plan there is a processor power setting, and if you change this to min = max = 100% then I'm pretty sure SpeedStep is disabled. If that's not the case you can use RMClock to do the job. However, disabling SpeedStep doesn't give you more performance. There are some stupidly programmed applications (read: FutureMark 2006) that give higher scores with SpeedStep disabled, but in the general case disabling SpeedStep doesn't yield higher fps. If the application needs more CPU performance, the operating system switches the CPU into its highest performance mode anyway. You can observe this with RMClock's monitoring tool and I think even with Vista's own tools.

    Switching from a 2x 1GB = 2GB configuration to a 2GB + 1GB = 3GB configuration will give you more Office and Windows performance. However, fps rates will degrade by a large amount, since flex mode (3GB) has lower bandwidth than pure dual channel (2GB). Flex mode is the ability to run 2GB of the 3GB configuration in dual channel and the remaining 1GB in single channel; combined, this is called flex mode. The memory performance then depends on which memory location you are addressing.
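
    A rough picture of what that flex-mode split means for the 2GB + 1GB case (peak figures as before, assuming DDR2-667 modules; actual throughput depends on where an allocation lands):

    Code:
    # flex mode with one 2GB + one 1GB DDR2-667 module:
    #   lower 2GB  -> 1GB taken from each module, interleaved -> dual channel
    #   upper 1GB  -> remainder of the 2GB module only        -> single channel
    echo "interleaved region:    $(echo '667*8*2/1000' | bc -l) GB/s"  # ~10.7 GB/s
    echo "single-channel region: $(echo '667*8/1000' | bc -l) GB/s"    # ~5.3 GB/s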
     
  49. diegorborges

    diegorborges Notebook Enthusiast

    Reputations:
    1
    Messages:
    46
    Likes Received:
    0
    Trophy Points:
    15
    So, no hope for me to improve the graphics performance? :(

    Well, I'm screwed, because I work with After Effects and 3ds Max, so the extra 1GB of RAM will be welcome, but then I'll lose some of the already poor performance in games?

    Damn.

    BTW, I'm playing The Sims 3. Yeah, it runs just like The Sims 2. You guys can try it. Great game.
     
  50. UnReaL596

    UnReaL596 Notebook Consultant

    Reputations:
    6
    Messages:
    197
    Likes Received:
    0
    Trophy Points:
    30
    Noob question, and probably already answered pages back, but I'm too lazy to scroll through it all:

    what drivers do you guys prefer for Windows 7? I want to give it a go and need to know which drivers to use. I'm guessing the Vista ones?
     