
ATI/AMD or NVIDIA

17 members have voted

  1. ATI/AMD or NVIDIA

    • ATI/AMD
      7
    • NVIDIA
      10


Recommended Posts

Did I just read that right? You think AMD is anywhere near Intel performance/quality? AMD just got up to par with 1st Gen Sandy Bridge...

 

As for manual voltage control, the new GeForce 6XX series has automatic voltage setting, and going higher just fries the card (they either get rid of the manual voltage settings or get rid of the 5-year warranties).

Don't insult me. I have trained professionals to do that.

Did I just read that right? You think AMD is anywhere near Intel performance/quality? AMD just got up to par with 1st Gen Sandy Bridge...

 

As for manual voltage control, the new GeForce 6XX series has automatic voltage setting, and going higher just fries the card (they either get rid of the manual voltage settings or get rid of the 5-year warranties).

 

AMD only beats Intel in integrated GPU performance (the new Trinity APUs are pretty boss in terms of gaming performance) and price; other than that, I'd say Intel is still better.

 

And while that's a very valid point, Nvidia's recent stance is scaring third-party manufacturers away from letting the user over-volt their GPU to get the best performance possible from it. Nvidia could easily flex their policy to allow third-party manufacturers to produce these cards with over-volting features, but void the Nvidia warranty on the card and place the responsibility for covering/warrantying the card on the company that manufactured the unit. I'm surprised that's not how things work in the first place. Without those optional voltage tweaks, overclocked AMD cards will be able to easily match and even surpass Nvidia's performance for less money.

Did I just read that right? You think AMD is anywhere near Intel performance/quality? AMD just got up to par with 1st Gen Sandy Bridge...

 

As for manual voltage control, the new GeForce 6XX series has automatic voltage setting, and going higher just fries the card (they either get rid of the manual voltage settings or get rid of the 5-year warranties).

 

 

Yes, you read correctly.

 

And to be honest, it's not always about "what's best"; it's more about "what's the best bang for your buck".

 

 

And I also remember people saying that AMD was nowhere near first-gen i3 performance. So this whole "Intel is better and you guys suck at buying hardware" attitude is extremely far from the truth. Faster, sure. Better? Maybe, if you're willing to give an arm and a leg plus some parts of your brain to afford it. And what about a person's own sense of comparing for themselves what the best deal is for their chipset and motherboard? Never forget that, because at the end of the day it's a personal computer, not a "FUCK YEAH MINE'S BETTER THAN YOURS ASASGDKHASGDJA".

 

Just saiyan.

I just... I don't even...


It's going to be impossible to beat Intel anyway, since they have the advantage of basically having been there since the beginning and were popular way before ATI started up. The Intel + Microsoft combination was a staple for PCs back in the '90s, and I think Intel is going to survive any competition simply because they have better funding.

 

That being said, I still support AMD all the way until they disappear. Their cards do the trick for me even if nVidia offers more advanced onboard stuff like PhysX.

Game developments at http://nukedprotons.blogspot.com

Check out my music at http://technomancer.bandcamp.com

It's going to be impossible to beat Intel anyway, since they have the advantage of basically having been there since the beginning and were popular way before ATI started up. The Intel + Microsoft combination was a staple for PCs back in the '90s, and I think Intel is going to survive any competition simply because they have better funding.

To clear up a few things: when IBM decided to build their own desktop computer, the "IBM PC", they decided to use the Intel 8088 processor (basically a cheaper version of the Intel 8086) and later switched to the Intel 80286 (an extended version of the 8086).

Also, in 1980 they signed up with Microsoft to provide an operating system, MS-DOS, which IBM rebranded as "PC-DOS".

 

For various reasons, the IBM PC was a success, despite the crappy architecture and OS compared to other machines of the time. One reason may be the good reputation of IBM; however, IBM encouraged other companies to sell licensed clones of the IBM PC, and other companies, like AMD, also started to manufacture their own 80x86 processor clones with their own extensions, which were then copied by others and so on (e.g. AMD made the original 64-bit extension to the x86 architecture).

 

What you are most likely sitting in front of right now (unless you are using a "smart" phone or some other embedded mobile device) is a heavily modified/evolved version of the IBM PC, with an x86-based processor whose architecture is a horrible patched-up mess with hundreds of extensions, backward compatible to 1979.

 

If your computer has a glowing apple on its back, you're just as well sitting in front of a (horribly overpriced) IBM PC descendant. Something like an "Apple Mac" doesn't exist anymore; Apple abandoned their own Macintosh architecture.

 

 

The dominance of the Wintel platform (Windows + Intel) on the desktop and the Windows monopoly have ensured that it stays that way and that the x86 processor keeps getting patched and extended all over.

 

My personal hope is that, with the shift towards mobile platforms with newer architectures and the upcoming WinDOS 8 disaster, we might someday get rid of that stone-age abomination they call the "x86 architecture".

 

That being said, I still support AMD all the way until they disappear. Their cards do the trick for me even if nVidia offers more advanced onboard stuff like PhysX.

 

There's a reason the original Ageia PhysX PPUs flopped: people don't buy extra hardware that no software utilizes, and software developers don't develop software that utilizes hardware nobody has. Nvidia graphics cards don't have a PPU on them. Nvidia acquired Ageia, but only uses the PhysX API (Application Programming Interface) and implements it with GPGPU (i.e. a program running on the GPU), so it could theoretically run on GPUs from other vendors (e.g. AMD) as well; however, Nvidia ties PhysX to their platform by making it refuse to work on other GPUs (and it is probably optimized for their GPUs only as well). So developers using PhysX either lock their software to Nvidia GPUs (with good ol', way slower CPU physics on everything else), or have to take the extra effort of using two physics engines with an abstraction layer in between.

 

But there are other physics engines that use GPGPU, like Bullet, which is even free (as in free speech, not free beer; i.e. no restrictions, a.k.a. "open source"), so why use PhysX?
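
To make that "abstraction layer" idea a bit more concrete, here is a rough C sketch (the names and backends are made up for illustration; this is not the actual PhysX or Bullet API). The game code only ever talks to the small interface struct, and whichever backend the hardware supports gets plugged in at startup:

#include <stdio.h>

typedef struct {
    const char *name;
    void (*step)(float dt);   /* advance the simulation by dt seconds */
} physics_backend;

/* Would normally hand the work to a GPU engine (PhysX, Bullet's GPU solver,
 * ...); stubbed out here for illustration. */
static void gpu_step(float dt) { printf("GPU physics step: %.4f s\n", dt); }

/* Plain CPU physics: slower, but works with any graphics card. */
static void cpu_step(float dt) { printf("CPU physics step: %.4f s\n", dt); }

static const physics_backend gpu_backend = { "GPU", gpu_step };
static const physics_backend cpu_backend = { "CPU", cpu_step };

int main(void)
{
    /* In a real game this flag would come from hardware detection. */
    int have_supported_gpu = 0;
    const physics_backend *phys =
        have_supported_gpu ? &gpu_backend : &cpu_backend;

    printf("Using %s physics backend\n", phys->name);
    for (int frame = 0; frame < 3; frame++)
        phys->step(1.0f / 60.0f);
    return 0;
}

With something like this in place, swapping PhysX for Bullet (or for a plain CPU solver) means adding another backend struct rather than rewriting the game.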

It's going to be impossible to beat Intel anyway, since they have the advantage of basically having been there since the beginning and were popular way before ATI started up. The Intel + Microsoft combination was a staple for PCs back in the '90s, and I think Intel is going to survive any competition simply because they have better funding.

To clear up a few things: when IBM decided to build their own desktop computer, the "IBM PC", they decided to use the Intel 8088 processor (basically a cheaper version of the Intel 8086) and later switched to the Intel 80286 (an extended version of the 8086).

Also, in 1980 they signed up with Microsoft to provide an operating system, MS-DOS, which IBM rebranded as "PC-DOS".

For various reasons, the IBM PC was a success, despite the crappy architecture and OS compared to other machines of the time. One reason may be the good reputation of IBM; however, IBM encouraged other companies to sell licensed clones of the IBM PC, and other companies, like AMD, also started to manufacture their own 80x86 processor clones with their own extensions, which were then copied by others and so on (e.g. AMD made the original 64-bit extension to the x86 architecture).

What you are most likely sitting in front of right now (unless you are using a "smart" phone or some other embedded mobile device) is a heavily modified/evolved version of the IBM PC, with an x86-based processor whose architecture is a horrible patched-up mess with hundreds of extensions, backward compatible to 1979.

If your computer has a glowing apple on its back, you're just as well sitting in front of a (horribly overpriced) IBM PC descendant. Something like an "Apple Mac" doesn't exist anymore; Apple abandoned their own Macintosh architecture.

The dominance of the Wintel platform (Windows + Intel) on the desktop and the Windows monopoly have ensured that it stays that way and that the x86 processor keeps getting patched and extended all over.

My personal hope is that, with the shift towards mobile platforms with newer architectures and the upcoming WinDOS 8 disaster, we might someday get rid of that stone-age abomination they call the "x86 architecture".

That being said, I still support AMD all the way until they disappear. Their cards do the trick for me even if nVidia offers more advanced onboard stuff like PhysX.

There's a reason the original Ageia PhysX PPUs flopped: people don't buy extra hardware that no software utilizes, and software developers don't develop software that utilizes hardware nobody has. Nvidia graphics cards don't have a PPU on them. Nvidia acquired Ageia, but only uses the PhysX API (Application Programming Interface) and implements it with GPGPU (i.e. a program running on the GPU), so it could theoretically run on GPUs from other vendors (e.g. AMD) as well; however, Nvidia ties PhysX to their platform by making it refuse to work on other GPUs (and it is probably optimized for their GPUs only as well). So developers using PhysX either lock their software to Nvidia GPUs (with good ol', way slower CPU physics on everything else), or have to take the extra effort of using two physics engines with an abstraction layer in between.

But there are other physics engines that use GPGPU, like Bullet, which is even free (as in free speech, not free beer; i.e. no restrictions, a.k.a. "open source"), so why use PhysX?

Yeah, I'm aware that almost all PCs nowadays technically still use the same standard dating back to the late '70s. It's actually a bit cool, in my opinion.

 

Regarding nVidia, my ATI card has no problem emulating PhysX anyway, so it's not that big of a deal to me.

Game developments at http://nukedprotons.blogspot.com

Check out my music at http://technomancer.bandcamp.com


Yeah, I'm aware that almost all PCs nowadays technically still use the same standard dating back to the late '70s. It's actually a bit cool, in my opinion.

If you depend on software from back then, that's good. The problem, however, is that modern x86 CPUs are bloated with hacky extensions. There are, for example, three different syscall instructions, and you have to fiddle around with some status bits to find out which one to use; there are about 10 SIMD extensions(!). Some instructions need more and more clock cycles to execute because the decoding gets ever more complicated, which is compensated for by internal Harvard-style caching and branch prediction to get actual speed gains; it only acts like a von Neumann machine for the sake of backwards compatibility.
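
To give an idea of what that extension chaos means in practice, here is a tiny sketch (GCC/Clang-specific, purely illustrative) of the probing every portable x86 program has to do before it can safely use any of those SIMD extensions; the builtins below just wrap the CPUID instruction:

#include <stdio.h>

int main(void)
{
    __builtin_cpu_init();   /* fill in the CPU feature table (wraps CPUID) */

    /* Each of these is a separate extension the chip may or may not have. */
    printf("sse:    %s\n", __builtin_cpu_supports("sse")    ? "yes" : "no");
    printf("sse2:   %s\n", __builtin_cpu_supports("sse2")   ? "yes" : "no");
    printf("sse3:   %s\n", __builtin_cpu_supports("sse3")   ? "yes" : "no");
    printf("ssse3:  %s\n", __builtin_cpu_supports("ssse3")  ? "yes" : "no");
    printf("sse4.1: %s\n", __builtin_cpu_supports("sse4.1") ? "yes" : "no");
    printf("sse4.2: %s\n", __builtin_cpu_supports("sse4.2") ? "yes" : "no");
    printf("avx:    %s\n", __builtin_cpu_supports("avx")    ? "yes" : "no");
    printf("avx2:   %s\n", __builtin_cpu_supports("avx2")   ? "yes" : "no");
    return 0;
}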

A friend of mine had a case at his company where a Pentium 4 outperformed an i7 (about the same clock speed, only one core used on the i7).

 

IMO the backwards compatibility is getting (or already is) in the way of progress. We should design a whole new CPU architecture from scratch, standardized and governed by a consortium of vendors (like OpenGL), to avoid the extension chaos and bloat.

 

But there are two things that keep a new architecture from taking off. For one, the Windows dominance and proprietary software in general are in the way: to get MS and application developers to port to a new architecture, you need to win them over with a huge number of users, and the software can't be ported by anyone else, because it's proprietary. For the same reason you won't get hardware drivers/hardware support. Both turn potential users away from the platform. Intel already tried this themselves (remember Itanium?), and there has already been something like that consortium idea (remember PowerPC?).

 

The second reason is that if you don't have a lot of users, nobody will invest money in your architecture ($2M minimum to get ASIC production started), and you can't match Intel's development and manufacturing, so you won't even be near their performance.

 

Regarding nVidia, my ATI card has no problem emulating PhysX anyway, so it's not that big of a deal to me.

Yes, because PhysX is not actually running on the ATI card; it uses the CPU fallback instead, like I said above. See the PhysX FAQ.


A friend of mine had a case at his company where a Pentium 4 outperformed an i7 (about the same clock speed, only one core used on the i7).

So in other words, the i7 should perform about 4x better than a Pentium 4 when not crippled down to a single core (actually more, since hyperthreading gives an extra performance boost).

 

IMO the backwards compatibility is getting (or already is) in the way of progress. We should design a whole new CPU architecture from scratch, standardized and governed by a consortium of vendors (like OpenGL), to avoid the extension chaos and bloat.

 

But there are two things that keep a new architecture from taking off. For one, the Windows dominance and proprietary software in general are in the way: to get MS and application developers to port to a new architecture, you need to win them over with a huge number of users, and the software can't be ported by anyone else, because it's proprietary. For the same reason you won't get hardware drivers/hardware support. Both turn potential users away from the platform. Intel already tried this themselves (remember Itanium?), and there has already been something like that consortium idea (remember PowerPC?).

 

There have been instances where companies adopted new architectures and maintained compatibility with older software by means of some additional hardware. Apple had something like that, a separate card in one of their computers that provided native processing for older applications, and Sony put a physical chip in the early PS3s to natively handle PS2 titles. I'm not saying this is a viable way of advancing architectures (in fact, it would just leave the software at a standstill), but it is a good solution for companies or individuals that depend on older software with no modern equivalent.

 

Also, most gamers have a PowerPC-based computer in their living room and don't even know it (all three modern consoles, as well as the Nintendo GameCube, use a PowerPC-based processor), so I'd hardly say users are turned away from that platform.


So in other words, the i7 should perform about 4x better than a Pentium 4 when not crippled down to a single core (actually more, since hyperthreading gives an extra performance boost).

Adding more cores doesn't magically double performance. What a processor does is fetch an instruction and then do something (e.g. load a data word from RAM into a register or add two registers together). Multiple cores mean a program can get the CPU to fetch two (or more) different instructions and execute them in parallel (i.e. at the same time), so a program can boost performance by processing data in parallel if and only if the algorithm in question can process data elements independently of each other. There are problems that can't be parallelized (there's a saying some professors use at that point, something like "bearing a child will always take 9 months, no matter how many women are involved").
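
Here is a small C sketch of that "independent elements" point (illustrative only; build with something like gcc -O2 -fopenmp). The first loop can be split across however many cores you have because each element stands on its own; the second is a dependency chain, so extra cores don't help it at all:

#include <stdio.h>

#define N 1000000

static double a[N], b[N];

int main(void)
{
    /* Parallelizable: b[i] depends only on a[i], so the iterations are
     * independent and can be handed out to as many cores as exist. */
    #pragma omp parallel for
    for (long i = 0; i < N; i++)
        b[i] = a[i] * 2.0 + 1.0;

    /* Not parallelizable: each iteration needs the result of the previous
     * one, so this chain runs at single-core speed no matter what. */
    double x = 1.0;
    for (long i = 0; i < N; i++)
        x = x * 1.0000001 + b[i];

    printf("%f\n", x);
    return 0;
}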

That particular friend works at a company that develops embedded systems. When comparing CPU options, he decided (for the fun of it) to compare a single i7 core (fetching and executing instructions one by one, only one ALU, et cetera) with a Pentium 4 (single core, one ALU, et cetera) at roughly the same clock speed.

 

But of course, the i7 can be faster in a real-world desktop scenario, because the OS can run several applications independently on different cores with real parallelism, and it will probably need less energy, as the OS can power down unneeded cores when only one application is running and it doesn't utilize the other cores.

 

However, there is room for improvement, as the instruction decoding eats up more clock cycles, making the individual core slower as extensions are added.

There have been instances where companies adopted new architectures and maintained compatibility with older software by means of some additional hardware. Apple had something like that, a separate card in one of their computers that provided native processing for older applications, and Sony put a physical chip in the early PS3s to natively handle PS2 titles. I'm not saying this is a viable way of advancing architectures (in fact, it would just leave the software at a standstill), but it is a good solution for companies or individuals that depend on older software with no modern equivalent.

 

Also, most gamers have a PowerPC-based computer in their living room and don't even know it (all three modern consoles, as well as the Nintendo GameCube, use a PowerPC-based processor), so I'd hardly say users are turned away from that platform.

Yes, but game consoles and Apple computers are a little different from the PC market: they are closed platforms, there is only one vendor for each device with complete control, and the devices are nowhere near compatible with each other.

There are, however, a lot of PC hardware vendors developing a single, open platform, and software will run across various hardware combinations. If somebody comes up with a new device, it will fade out quickly, as desktop computer owners will stick with their PCs, because their beloved Windows, as well as the arsenal of proprietary applications they use, won't run on a different platform.

 

I don't really see how creating an incompatible architecture would improve anything anyway, but then again I am not much of a programmer.

On the hardware side, the things I said above; on the software side, it can be way cleaner when we have an adequate, planned core instruction set instead of all those hacky extensions. Of course, yet another architecture won't do any good by itself; what would is an actual transition to it.


So in other words, the i7 should perform about 4x better than a Pentium 4 when not crippled down to a single core (actually more, since hyperthreading gives an extra performance boost).

Adding more cores doesn't magically double performance.

 

Well, duh. (I cut out the rest of the quote because it was an unneeded explanation on my part.) I only care about the final user experience when I speak of performance (so yes, you are correct in saying there's no automatic doubling of performance just by throwing more cores at the problem; the software needs to be able to utilize the resources as well). Have your friend with the Pentium 4 encode a 1080p video of Big Buck Bunny using Handbrake and report the average FPS. I would not be surprised if that average FPS were about 1/4 of what an i7 can hit. In that case, my original statement would remain true.

 

To take from the metaphor you used: yes, it's true a woman can only bear one child (well, technically that's not always true, but I'll let that slide) every 9 months (again, not always true, but letting it slide), but it's also true that four women can produce four children faster than a single woman can.
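
For what it's worth, the usual back-of-the-envelope formula here is Amdahl's law: speedup = 1 / ((1 - p) + p/n), where p is the fraction of the work that can run in parallel and n is the number of cores. A quick C sketch (the 95% parallel fraction below is a made-up illustration, not a Handbrake measurement):

#include <stdio.h>

/* Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n) */
static double amdahl(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void)
{
    double p = 0.95;                  /* hypothetical parallel fraction */
    int cores[] = { 1, 2, 4, 8 };

    for (int i = 0; i < 4; i++)
        printf("%d core(s): %.2fx\n", cores[i], amdahl(p, cores[i]));
    return 0;
}

With p = 0.95 that works out to roughly 3.5x on four cores and about 5.9x on eight, so "about 4x" is the right ballpark for a heavily parallel job like video encoding, even though it is never exactly 4x.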

 

There have been instances where companies adopted new architectures and maintained compatibility with older software by means of some additional hardware. Apple had something like that, a separate card in one of their computers that provided native processing for older applications, and Sony put a physical chip in the early PS3s to natively handle PS2 titles. I'm not saying this is a viable way of advancing architectures (in fact, it would just leave the software at a standstill), but it is a good solution for companies or individuals that depend on older software with no modern equivalent.

 

Also, most gamers have a PowerPC-based computer in their living room and don't even know it (all three modern consoles, as well as the Nintendo GameCube, use a PowerPC-based processor), so I'd hardly say users are turned away from that platform.

Yes, but game consoles and Apple computers are a little different from the PC market: they are closed platforms, there is only one vendor for each device with complete control, and the devices are nowhere near compatible with each other.

 

A little different, but not enough to really split hairs over. Saying that the vendors have complete control of their devices is rubbish, considering my GC, PS3, and iBook are all running an operating system and custom firmware none of the vendors distributed or approve of.

 

If somebody comes up with a new device, it will fade out quickly, as desktop computer owners will stick with their PCs, because their beloved Windows, as well as the arsenal of proprietary applications they use, won't run on a different platform.

 

That would only be true if we lived in a world without Wine and Linux, but fortunately we do not. A majority of proprietary applications can be run just fine in a Wine bottle on a Linux machine.


My priority is budget computing and getting the most bang for my buck; a quad-core CPU, 8GB of RAM, and a 2GB graphics card that can hit decent frame rates is all I'm after.

 

 

Yeah, that was my goal 2 years ago. I managed to get by pretty well for $800.

"Do not inhale fumes, no matter how good they smell."

nvidia make the fastest available cards (for the moment). These great end designs, (9800GX2, 8800GTX), are cost accordingly. They have a lot of different cards available to mix up and bemuse the informal community, and techies in pc shops who know nothing.

AMD(Ati) are no slouches possibly, but really they have got the 3870x2,3870 and 3850 as their present range of cards - with the 48--s arriving up in the long run.

At when I would go for nvidia, they seem to be engaged with almost every game in development, and seem to run better.

 

Have I traveled 6 years back in time, or is this an advanced spambot with a fallacy?

nvidia make the fastest available cards (for the moment). These great end designs, (9800GX2, 8800GTX), are cost accordingly. They have a lot of different cards available to mix up and bemuse the informal community, and techies in pc shops who know nothing.

AMD(Ati) are no slouches possibly, but really they have got the 3870x2,3870 and 3850 as their present range of cards - with the 48--s arriving up in the long run.

At when I would go for nvidia, they seem to be engaged with almost every game in development, and seem to run better.

 

Have I traveled 6 years back in time, or is this an advanced spambot with a fallacy?

Exactly what I was thinking.

Don't insult me. I have trained professionals to do that.


I'm actually thinking of getting an nVidia card next, just to see if there is any notable difference in gaming. I mostly want one because the PhysX stuff in Borderlands 2 is making my computer lag noticeably, probably because I have an ATI card atm. Either that, or get another ATI card that can properly handle the PhysX effects.

Game developments at http://nukedprotons.blogspot.com

Check out my music at http://technomancer.bandcamp.com


ATI cards really can't handle the physics used in real-time gaming; only Nvidia has decent GPU-based physics processing capabilities (they actually have additional hardware functions designed for physics processing).

 

By the way, something as low as a GT 630 will max out Borderlands 2 with PhysX for about $65 (if you get the EVGA GDDR5 version; less without it, more with 4GB of GDDR3).

Don't insult me. I have trained professionals to do that.


Funny, I'm actually seriously considering going with an AMD card as my next graphics card. From what I'm reading, they have better support in the Linux community and it helps (or doesn't help) that the pricing is more appealing than the competing Nvidia cards. Of course, no buying new hardware for me until Black Friday or Cyber Monday.

 

By the way, something as low as a GT 630 will max out Borderlands 2 with PhysX for about $65 (if you get the EVGA GDDR5 version; less without it, more with 4GB of GDDR3).

 

This is a great solution for those who need PhysX support but already have a satisfactory AMD card to work with. You can actually set up your computer to use both an AMD and an Nvidia card pretty easily; it just takes a minor hack that's no more complicated than following a set of instructions. The tutorial here (http://www.overclock.net/t/1306374/how-to-use-a-nvidia-card-as-physx-while-using-amd-for-primary/10) shows how to do it with certain cards, so your mileage may vary.

From what I'm reading, they have better support in the Linux community

Actually, Steam for Linux has been the major driving force behind Nvidia's new Linux drivers, and they are supposedly just as good as the AMD drivers now.

Don't insult me. I have trained professionals to do that.

