http://www.youtube.com/watch?v=eXJUGLiZkV0
if this becomes common, prepare for a 3d paradigm shift!
-
mind = blown.
-
That is a fascinating video. I really hope it pans out. For some reason my BS alarm went off, but I really hope it's wrong. My rudimentary background in computer programming tells me what they are doing is indeed possible.
If it were truly possible to render only the pixels necessary for the screen, it would be revolutionary. Then again, I still want to understand what sort of system the world geometry exists in. Some of the worlds they were demonstrating seem like they would imply incredibly massive databases. As far as I can tell, they still have to break down various objects into smaller point cloud models. For instance, the pyramids of creatures were broken down into sub-pyramids before getting down to individual creatures. I could see a data structure that creates the creature, then the sub-pyramid, and then the pyramid; this sort of hierarchy is fundamental to computer science.
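Just to sketch the kind of hierarchy I mean (my own guess at a data structure, nothing they have actually described): each node either holds raw points or just references child nodes with a transform, so one detailed creature model gets reused for the whole pyramid.

#include <cstdint>
#include <vector>

// Hypothetical instanced hierarchy for point-cloud models.
struct Point { float x, y, z; std::uint32_t rgba; };
struct Transform { float translate[3]; float scale; };

struct Node {
    std::vector<Point> points;                      // leaf data, e.g. one creature
    struct Child { const Node* node; Transform xf; };
    std::vector<Child> children;                    // instances, e.g. creatures in a sub-pyramid
};

int main() {
    Node creature;                                  // detailed point data stored once
    creature.points.push_back({0.0f, 0.0f, 0.0f, 0xff00ff00u});

    Node subPyramid;                                // places the same creature many times
    for (int i = 0; i < 10; ++i)
        subPyramid.children.push_back({&creature, {{float(i), 0.0f, 0.0f}, 1.0f}});

    Node pyramid;                                   // places the sub-pyramid many times
    for (int i = 0; i < 5; ++i)
        pyramid.children.push_back({&subPyramid, {{0.0f, 0.0f, float(i)}, 1.0f}});
    return 0;
}

The pyramid never stores each creature's points again, just references and offsets, which would be one way to keep those huge demo scenes from implying an even more massive database.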
I hope these guys talk to the people at Introversion. I think they'd be a great candidate for a first developer. -
It's cool, and a great idea. But like StormEffect points out, it would seem to require massive amounts of data on the back end. 100 GB installs for a chess game, anyone?
-
masterchef341 The guy from The Notebook
I think that a lot of the problem with this is that polygons aren't broken. We actually have really great polygon-based graphics.
From a rendering perspective, I am skeptical as to the gains that point clouds get you.
The "Unlimited" thing is just marketing. You still have to build a model of a structure or character, so there isn't more detail there than you specify. I can specify an "unlimited detail" circle just by saying radius = 3 meters, so obviously unlimited detail is not sufficient on its own to be paradigm shifting. In fact, polygons and lines also specify unlimited detail in the same way that a point does.
From a modeling / animation workflow and results perspective, I am also skeptical as to what this gets you.
Can you model 3d data as a set of colored points? Absolutely. Other than that, I am not convinced. -
http://www.downloadsquad.com/2010/0...aims-to-leave-current-3d-technologies-in-the/
Masterchef seems to have it. Any dynamic effect - run-time lighting such as shadows cast by a player's flashlight, or ragdoll-type physics animation - is exceedingly difficult or impossible with this tech. -
Sounds like an interesting proposition. I suppose we'll see in about 16 months or so.
-
I want unlimited money. Can they do that for me?
Actually, that is interesting. So does this mean a new card manufacturer, or would this concept be applied to nVidia's and ATi's hardware? I can imagine that in 20 years 3D might be done entirely in software if computing power is high enough. Come to think of it, the more you shrink polygons, the closer they get to being just points. -
mobius1aic Notebook Deity NBR Reviewer
-
Very very entertaining and interesting. Thanks
-
thinkpad knows best Notebook Deity
Well, it would absolutely solve the problem of pop in/pop up.
-
-
Lostinlaptopland Notebook Consultant
A petty point perhaps, but why did the guy in the video use 10 x 7 resolution as an example? If he wanted to appeal to gamers, why not 16 x 10 or 19 x 12?
It would have sounded more impressive that way. -
I'll wait for the tech to reach market stage before I make any judgements. It's obviously quite early. If they were truly running that in software it's quite impressive, but the visuals didn't look much better than a 3DMark demo from the early 2000s.
-
Great as a concept.
Wasn't impressed with the demo one bit, but I guess we'll just have to wait and see if they can actually create software that will work on real hardware in 16 months' time -
Just give it time, people. It's a proof of concept. If you judged a car by the nuts, bolts, aluminum foil, 2x4s, plywood, and cardboard we use to put together proof-of-concept samples, you'd think it wouldn't work either.
-
Exactly, it's just a concept for now, wait and see how it is.
Personally I'm excited; if this company can make it work, eliminating the constant need to upgrade my GPU would be the greatest PC gaming breakthrough in my lifetime. -
Anyway, what does that matter? He just used that as an example; it doesn't change the whole scheme of things. -
How does modeling with this system work? It seems like it would be a lot more painstaking than polygons...
Although the search algorithm thing makes sense, and could probably even be applied to polygon systems... -
-
SoundOf1HandClapping Was once a Forge
In two years time I want to see a three-way graphics battle between these guys, ATI, and nVidia. It would be full of win.
-
H.A.L. 9000 Occam's Chainsaw
I wonder how an HD5870 would handle that?
-
masterchef341 The guy from The Notebook
I thought about this some more. I've decided it's garbage. At best, this is being sold as a way to increase geometric detail at the expense of animation, lighting, and shadow. That is simply not what we need to improve modern graphics.
We are kind of at a place where the geometry is approaching good enough. Given a certain number of pixels on the screen, eventually, adding more polygons to a model won't actually change the final image on the screen. We aren't there yet, but it doesn't matter. The point is, adding more geometric detail to an image is just one way to improve image quality, and we already have a LARGE capacity to render geometric detail on screen. Lighting. Shadow. Animation. These add so much more to a scene than geometry.
I mentioned this before but I reiterate it now because I gave it more thought.
What makes this image appealing and how could it be improved?
Edit: important categories to consider: geometric detail vs lighting and shadow
A large part of Crysis's appeal was the interaction you had with the buildings and such - procedural, physics-based animation. -
Why is it garbage? Just because it's different from what's done today? What we need is more outside-the-box thinking rather than iteration after iteration after iteration of what we do today.
In the end the technology may not pan out completely but I bet you nVidia and ATi will learn something and incorporate certain aspects of their technology. Don't shoot something down without even knowing all the details or seeing where it is in 16 months time.
One benefit I can see is for destructible environments. Since the 3D model is made of "atoms" instead of a polygon shell, you could have items blow up in random fashion without having to model stock animations or cookie-cutter models. I always thought it would be nice to be able to assign an attribute to every item in a 3D world describing its density, brittleness, and mass, so that it could break apart realistically without using stock animation.
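Something like this is what I have in mind (purely hypothetical, not anything shown in their demo): if every "atom" carries material properties, a fracture routine can decide point by point what breaks free, instead of swapping in a canned destruction model.

#include <cstdint>
#include <vector>

// Hypothetical per-"atom" record: position, color, and material properties.
struct Atom {
    float x, y, z;
    std::uint32_t rgba;
    float density;      // kg/m^3
    float brittleness;  // 0 = ductile, 1 = shatters easily
};

// Toy fracture rule: an atom breaks free if the impulse from a blast at
// (bx, by, bz) exceeds what its density can absorb. No stock animation.
std::vector<Atom> fracture(std::vector<Atom>& object,
                           float bx, float by, float bz, float energy) {
    std::vector<Atom> debris;
    for (auto it = object.begin(); it != object.end(); ) {
        float dx = it->x - bx, dy = it->y - by, dz = it->z - bz;
        float d2 = dx * dx + dy * dy + dz * dz + 1e-6f;
        float impulse = energy * it->brittleness / d2;
        if (impulse > it->density) {
            debris.push_back(*it);      // this piece flies off on its own
            it = object.erase(it);
        } else {
            ++it;
        }
    }
    return debris;
}

int main() {
    std::vector<Atom> crate = { {0.0f, 0.0f, 0.0f, 0xffa0a0a0u, 600.0f, 0.8f},
                                {0.0f, 1.0f, 0.0f, 0xffa0a0a0u, 600.0f, 0.8f} };
    auto pieces = fracture(crate, 0.0f, 0.0f, 0.0f, 1000.0f);
    return int(pieces.size());
}

Whether their format could actually afford to store and update extra fields like that for every point is exactly the open question. -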
Lostinlaptopland Notebook Consultant
Seeing as he mentioned software, I do not think ATi or nVidia will have anything to do with this implementation. They primarily design the hardware. What would they be needed for?
-
It would still need hardware acceleration. They are just using software for the proof of concept. I am not saying nVidia or ATi will have anything to do with it; I'm just saying that if this company can't take off, nVidia and/or ATi can use some of their technology to help improve their own products. They may even try to sell it to one or both of them if they just want to license the technology.
-
Well, we will have to see how this works out. Actually, what graphics cards do nowadays is about the same thing - rendering pixels on the screen; it's just that which pixel gets which color is calculated from polygons, and the concept of polygons in a 3-dimensional coordinate system is close to real life and made development easier. I have no idea how this one is going to work, though; even if it's a search algorithm that just colors the screen, you still need to know what to search for.
About the interactive environment part, you can do it with polygons as well, just use a physics engine together with it to control the animation. -
The reason polygons look so good is that the focus for ~15 years has been almost exclusively on them and on making them better. This could indeed be a better way of doing things, but going up against the established way of doing things will make this a rocky road for them. I don't see it panning out, but I wish them the best of luck.
-
Shadow and lighting are still extremely taxing on the latest high-end mobile GPUs.
I think it's time for something similar to the Xbox 360: a dual-chip system where the main chip focuses on shaders and the basics, and the secondary chip handles shadow/lighting and anti-aliasing. This way you could have shadow/lighting/AA at no cost to performance.
That, I suppose, would be more groundbreaking for PC gaming. -
masterchef341 The guy from The Notebook
Lighting and shadow, and shading in general, are very complicated and particular to the game and game engine. The decision to implement them across one die or two is accordingly complicated. Splitting the graphics card into two separate components doesn't give you free performance - I'm not sure exactly where that idea came from! You still need all the hardware available to do all the tasks! -
I agree that this probably wouldn't be applicable to collision detection or deformable terrain. The first problem I see is that the search algorithm would work for graphics display, but for physics modeling (similar to lighting), you'd still need to act on every particle in the system, and that would cause problems.
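In rough loop-count terms (my own framing of the concern, not anything from their material): display work under their pitch scales with pixels on screen, but per-particle physics still has to visit everything being simulated.

#include <vector>

struct Particle { float x, y, z, vx, vy, vz; };

// Display: roughly one lookup per screen pixel (the claimed "search" step),
// so cost tracks resolution rather than scene size.
void renderFrame(int screenPixels) {
    for (int p = 0; p < screenPixels; ++p) {
        // find the nearest visible point for this pixel
    }
}

// Physics: every simulated particle must be touched every step,
// and for point clouds that count dwarfs the pixel count.
void stepPhysics(std::vector<Particle>& world, float dt) {
    for (Particle& a : world) {
        a.x += a.vx * dt;
        a.y += a.vy * dt;
        a.z += a.vz * dt;
    }
}

int main() {
    std::vector<Particle> world(1000, {0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f});
    renderFrame(1920 * 1080);   // ~2 million iterations regardless of world size
    stepPhysics(world, 0.016f); // grows with every point you add to the world
    return 0;
}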
-
Argh these polygons are ugly!
*11 years later*
Argh this unlimited detail is ugly!!
*11 years later*
???
In other words, give it some time. -
No one is denying its potential to make beautiful static geometry. People are just questioning whether or not it is really applicable to any sort of dynamic physics or lighting effects, and the feasibility of storing point cloud data for something as large as an entire game on modern storage systems.
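To put a very rough number on the storage side (a back-of-envelope with made-up figures, not anything from their videos):

#include <cstdint>
#include <cstdio>

int main() {
    // Guess at a naive per-point record: xyz as floats plus RGBA.
    const std::uint64_t bytesPerPoint = 3 * sizeof(float) + 4;      // 16 bytes

    // Say a modest outdoor level covers 1 km x 1 km of surface sampled
    // at one point per square centimeter (wildly hand-wavy, surfaces only).
    const std::uint64_t points = 100000ull * 100000ull;             // 10^10 points

    const double gib = double(points) * double(bytesPerPoint) / (1024.0 * 1024.0 * 1024.0);
    std::printf("~%.0f GiB uncompressed\n", gib);                   // ~149 GiB
    return 0;
}

Instancing and compression would pull that way down, but it shows why the 100 GB install jokes aren't entirely a joke.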
-
Tsun, thanks for that.
When I see that, isn't it amazing? Warcraft II is the game that got me more into gaming, and now I'm playing games like C&C Red Alert 3. It's amazing. -
And I'm sure if you saw Crysis in development stage it wouldn't have looked any better than the "Unlimited Detail" scene. You can't compare pre-pre-Alpha to final code.
-
masterchef341 The guy from The Notebook
My problems with it aren't related to the tech demonstration clip at all.
-
I think the real show-stopper here is dynamic effects using this unlimited detail engine. While I'm sure they can make it look gorgeous with a few artists, that still doesn't prove much.
The moment they get even a very simplistic player model actively interacting with the environment (hopefully a deformable environment) with even very rough physics and rough dynamic lighting, I'll be sold. -
I'd like to say, point clouds sound a lot like voxel technology; they are just re-branding a well-known technique. Anyway, voxel technology has a lot of problems as it is (although it is very interesting: Voxelstein, Thermite). Model swapping is called level of detail... it is done to avoid rendering detail that isn't going to be seen anyway.
I'm not against it, I just find the video overly optimistic, and it doesn't mention any of the problems. Really, they hardly give any information at all... I'd like to know their current rendering method, because I find it suspicious that a single-core CPU can draw that many concurrent points (each one composed of at least its 3 position coordinates, color, and alpha) and then perform a fast search and cull. Also, I wonder how they perform fast transformations on a point system, considering you'd have to push so much more geometry (every single point in the object) before you can simply rotate or translate it... It just does not compute. Until this is all shown to be good and true, paint me skeptical.
I really like this picture, it is very cute.
-
Was that done with Unlimited Detail?
-
-
I was wondering why there can't be some combination of the technologies... I mean, I am reminded of normal mapping, where only a few polygons are used with a normal map to express height and depth.
Also, I thought I saw a quick bit in the demo video where they showed light/shadow interacting with the unlimited detail object, but the object itself was pretty smooth, so not very impressive.
Anyway, I am all for letting them get funded to see where they can take it, as I am curious where it would lead. -
I imagine the modeling system would be like painting with pixels, but in three dimensions (you could model the internal organs as well). -
If you watch some of their other videos, you'll find out this is all from a programming point of view; they don't have a graphic designer among them right now, which is why it looks bad.
The point of the videos was that there are no polygons; each object is a real object using that point cloud system.
You'll also find out that you can convert voxel data to unlimited detail.
Seems interesting enough to see what potential it really has. -
I think this video (from 2008) shows similar technology (sparse voxel trees) has potential. Of course, it is much better when implemented by the GPU as that video is... Anyways, I found this while browsing this very interesting JGO thread. I think my initial estimate remains true, there are much better implementations of voxel technology out there, this unlimited detail concept seems to be all hype to get an "angel" investor.
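For anyone wondering what a sparse voxel octree actually stores, here's a minimal sketch of the general idea (not that video's specific implementation):

#include <array>
#include <cstdint>
#include <memory>

// Minimal sparse voxel octree node: empty space stores nothing at all,
// which is where the "sparse" savings come from.
struct SvoNode {
    std::uint32_t rgba = 0;                          // averaged color, usable as a far-away LOD
    std::array<std::unique_ptr<SvoNode>, 8> child;   // null = empty octant

    bool isLeaf() const {
        for (const auto& c : child)
            if (c) return false;
        return true;
    }
};

int main() {
    SvoNode root;
    root.child[0] = std::make_unique<SvoNode>();     // only occupied octants get allocated
    root.child[0]->rgba = 0xffff0000u;
    return 0;
}

Ray casting descends only into non-null octants, so the work per frame tracks what's visible rather than total scene size - which is essentially the same pitch as their "search algorithm", just with a published technique behind it.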
-
One thing is for sure: it seems to have gotten some attention, and by that measure the video demonstrations have done what they were meant to do.
-
mobius1aic Notebook Deity NBR Reviewer
Hybrid pixel+voxels is the way to go.
-
I'd LOL if they just licensed NovaLogic's Voxel Space technology and are trying to sell it as new. LOL.
-
The Concept of Unlimited Detail
Discussion in 'Gaming (Software and Graphics Cards)' started by kal360, Mar 12, 2010.