256MB GeForce FX PRO Better Than ATI 9800?

"Future proof"? Err... the FX and 9800 might run DirectX 9 content, but once games get heavier and use longer shaders we'll need newer and better display adapters.
It'll be available REAL SOON!!!! Are you still ready? ;)
Wait till they both come out instead of having a piss match over the 2 cards :D Likeliness is that the 9800 will smoke anything FX related due to the excellent job they did manufacturing it. You would think that after a year of working on it, it would be simply perfect :D
Nvidia bashing is obviously still the flavor of the month (or is it quarter now?). Nobody would moan if ATI released a 256MB version, and they will. Of course a 256MB card will be faster than a 128MB model in any application that uses more than 128MB of concurrent textures and geometry data (add AA overhead). Sure, this is not important for current applications, as they were targeted at 64MB hardware. Think future. Think cinematic effects. These things don't come for free just because you have the processing power to render the effects. It will take lots of memory to create the next generation of visuals. You don't want even an AGP x8 interface to slow things down. The real question is: Why not? Why not put 256MB on there? You know 256MB of video memory will be considered minimal in a few years, so why not get cracking right away? I think people are just into bashing Nvidia. Like I said, if this was ATI (and it will be soon enough) I don't think anyone would have commented negatively on it.
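The memory arithmetic behind that claim is easy to sketch. The numbers below (resolution, AA sample count, texture set size, bytes per pixel) are illustrative assumptions, not measurements of any real card or game:

```python
# Back-of-the-envelope VRAM estimate: render targets (multiplied by the
# AA sample count) plus a resident texture set. All sizes are hypothetical.

def framebuffer_bytes(width, height, bytes_per_pixel=4, buffers=3):
    """Double-buffered color plus depth/stencil, each assumed 4 bytes/pixel."""
    return width * height * bytes_per_pixel * buffers

def vram_estimate_mb(width, height, aa_samples, texture_mb):
    """Rough total in MB: multisampling multiplies the render-target footprint."""
    fb = framebuffer_bytes(width, height) * aa_samples
    return fb / (1024 * 1024) + texture_mb

# 1600x1200 with 4x AA and a 100 MB texture set already passes 128 MB:
total = vram_estimate_mb(1600, 1200, 4, 100)
```

Under those assumptions the total lands near 188 MB, which is the kind of workload where a 256MB card stops looking silly.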
*cough*theregister*cough*
heh the 128mb version of the 8500 was actually a bit slower than the regular 64mb version. it just was able to run fsaa in higher resolutions.
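That FSAA point makes sense from buffer sizes alone: supersampling renders internally at a multiple of the screen resolution, so the extra memory mostly buys room for those larger buffers rather than speed. A rough sketch (the per-pixel byte count and resolutions are assumptions, not 8500 specifics):

```python
def ssaa_buffer_mb(width, height, scale, bytes_per_pixel=8):
    """Color + depth (assumed 8 bytes/pixel combined) at the supersampled
    internal resolution; scale=2 per axis corresponds to 4x FSAA."""
    return (width * scale) * (height * scale) * bytes_per_pixel / 2**20

# 4x FSAA at 1280x1024 needs ~40 MB just for working buffers,
# which is cramped on a 64 MB card once textures are added:
buf = ssaa_buffer_mb(1280, 1024, 2)
```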
And this has nothing to do with a 256MB configuration... It's not like the 128MB disappears. Doom3 will probably be a very good excuse to have more than 128MB of video memory. Like I said, people are irrationally bashing Nvidia on this one. This does not make it a worse product. You were disappointed and YOU won't pay $100 extra. Hint: sell your shares now if you feel you cannot trust their performance. This is nothing to spend your life moaning about.
If you saw the recent video footage of the CryEngine from CryTek, you would agree that 256MB seems mighty tasty right now. :D
"I can understand 128MB on a 9700/Pro since the wider bus means that the card might eat the data stored there a LOT faster, and the AGP bus might become a bottleneck there." - BetrayerX

Ok, this is nonsensical. You do not need a larger memory footprint because the GPU/VPU 'eats' memory faster. Lots of stored data is reused. This depends on the application/game. If all the textures for an entire game fit in 64MB, then they can be preloaded into video memory and there will never be a request for new textures over the AGP again. Most games use much more memory for textures than that, but they use it in a smart manner, grouping textures per level or area of the game. This improves performance, as there is no need for massive texture swapping while you are playing; instead you wait to load a new level with new textures.

There is no point in owning a GeForceFX if all you want to do is play Quake. The new line of hardware (from ATI as well) is not focused on doing something simple incredibly fast; it is focused on programmable complexity and performing those operations as fast as possible. Both sound like "doing something as fast as possible," but trust me, the reality is very different (i.e., a cranked-up GF4 Ti4600 cannot do this; it is a different ball game).

It is a bit puzzling how the 9700/9800 and GF/FX turned out. On one side you have a new and very wide memory interface of 256 bits (ATI), and on the other a very wide core architecture (Nvidia). By this I am not referring to the number of rendering pipelines, but to the programmable aspect of the core. Surely both of these factors will increase in future generations. If there is one thing I find particularly amusing about the current situation, it is that the philosophies of these two companies seem to have flipped. It used to be that Nvidia was focused on getting max speed now and banking on the next generation to handle new features.
ATI included some great ideas but always lost a bit of performance to them in comparison to Nvidia. With the Radeon 9700/9800 and GF/FX these roles are reversed. The Radeon 9700/9800 is great for any contemporary game and will probably do OK some time into the future. The GeForceFX has specifications beyond its own operating capability; i.e., it can perform operations that would not really make sense to do at this speed level. This makes sense if Nvidia is mainly interested in ramping the clock of its product. At some point the processing capability will catch up with the feature set and it will be balanced (this may require a revised core for a revised memory interface). ATI looks like it will do what Nvidia used to: change the core for the new feature set in R400. This should not matter to consumers, who should focus on getting performance in what they deem important for the least amount of $. However, it could be important for the profitability of each company: one simply juices up the clock on the current design, leaving most engineers free to work on the next complete core, while the other is tied up both ways, having to gradually step features into the revised core and test them.

"I cannot understand 256 MB ram on any card for end users. It's ridiculous." - BetrayerX

This places you in illustrious company such as Bill Gates. At one point he could not see any reason anyone would need more than 640KB of memory. You know, deep down, that a video card with 1GB+ is not so far off. Maybe you are young and not sure how it goes, in which case I will tell you that some day you will find it amusing that people once measured memory in terms of megabytes. If you have seen screenshots of Doom3 or seen it in motion, know this: one day (not too far off) it will look completely real. Not just very real...it will look REAL. It will happen, and it will happen sooner than you think. Just don't dream that it will happen on today's hardware, because it won't.

(This is not to imply that the 256MB GF/FX will allow it; it is a general point about why 256MB of video memory is not ridiculous. Indeed, 256MB of video memory will be deemed ridiculous soon enough. Ridiculous as in too little, or "yeah, that 256MB card of yours might be good if you want to play that stick-figure game...Quake3.")
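The per-level texture preloading described earlier in this post (keep a level's whole texture set resident so nothing crosses the AGP bus mid-game) amounts to a simple budget check. A minimal sketch with made-up sizes, not engine code:

```python
def fits_resident(texture_sizes_mb, vram_mb, framebuffer_mb):
    """True if a level's entire texture set fits in video memory alongside
    the framebuffers, so no AGP texture swaps are needed during play
    (textures then only load at level transitions)."""
    return sum(texture_sizes_mb) + framebuffer_mb <= vram_mb

# A 40 MB level texture set on a 64 MB card with 16 MB of buffers fits;
# a 130 MB set on a 128 MB card does not, forcing AGP swaps mid-game:
small_level_ok = fits_resident([10, 15, 15], 64, 16)
big_level_ok = fits_resident([60, 40, 30], 128, 16)
```

This is exactly why more on-card memory matters more than bus speed here: once the working set fits, the AGP bus is idle during play no matter how fast it is.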