From what I know of AMD processor architecture, AMD processors generally have a broader scalable range than Intel chips, as well as an integrated, on-die memory controller. They have also generally had higher top-end front-side bus ratings than Intel chips (at least up until recently in the publicly available commercial arena). In essence they're technically more efficient and considerably faster for certain types of mathematical calculations (or potentially faster with the right hardware setup, since they don't have to go through an external memory controller). As said before, they're generally far better at handling certain types of mathematical calculations, and thus mathematically intensive applications too, making them a better candidate for scientific and military applications. Perfect examples would be real-time weather modelling, flight navigation and weapons guidance systems, amongst many others.

Despite being based in the US (California, the last time I checked) and having an obvious commercial front as well as a commercially available range of processors, I've a hunch that they only came into existence from military will and funding to initially develop and test military equipment. After which they went commercial to continue to finance the operation, and seemingly/evidently at a significantly lower consumer cost than Intel. But then I'm also thinking that the reason for this might be similar to the originally stipulated motives for making netbooks: repurposing expensive surplus and recycled parts that would otherwise have just been scrapped, as well as using it as a means to do in-the-field testing and research en masse.

If nothing else it definitely brought the cost of computing down for the general consumer, with comparable specs across the clock-speed ranges, unlike netbooks, which deviate very little from a generally outlined set of specs.

This is where I think it gets a little muddled. Despite the patriarchal associations and perceptions that come with the whole military thing, AMD/ATi are very much perceived as the feline entity in the broader picture of the blue-chip market, leaving Intel to assume the canine standing by default, if you will.
 
At the same time, Intel are somehow given this standing that they have little to do with military operations and promote more educational, scientific and peacefully oriented computing pursuits (as well as generally being more expensive for comparative performance).

Despite Intel processors costing a damn sight more than AMD chips, and Intel being on the commercial scene long before AMD were, Intel fanboys fail to realise that a lot of the technology now commonly found in modern Intel chips was used in AMD chips long before Intel chips even got a sniff of it, only with different naming conventions.

Essentially, Intel chips are AMD chips but with more conservative stability restrictions and caps placed on them. Other slight differences include architecture that's a little more power efficient. Then there's also the obvious: different branding, marketing and development teams behind them. The 64-bit x86 architecture (AMD64/x86-64) was essentially created and developed by AMD and is what Intel now employ, and it's only just recently, within the public commercial domain, that Intel have started to employ an on-die memory controller, with the Core i7 chips.
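
As a quick illustration of that shared lineage, here's a minimal sketch (assuming GCC or Clang on an x86 machine, using the compiler's <cpuid.h> helper) that reads the CPUID vendor string and the long-mode feature bit. Both "GenuineIntel" and "AuthenticAMD" parts report the same 64-bit (long mode) flag that AMD originally defined with AMD64:

/* Minimal sketch: query CPU vendor and the 64-bit (long mode) flag via CPUID. */
#include <stdio.h>
#include <string.h>
#include <cpuid.h>

int main(void) {
    unsigned int eax, ebx, ecx, edx;
    char vendor[13] = {0};

    /* Leaf 0: the vendor string lives in EBX, EDX, ECX (in that order). */
    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
        return 1;
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);

    /* Extended leaf 0x80000001: EDX bit 29 is the long-mode (64-bit) flag,
     * introduced by AMD with AMD64 and later adopted by Intel (as EM64T). */
    unsigned int long_mode = 0;
    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx))
        long_mode = (edx >> 29) & 1;

    printf("Vendor: %s, 64-bit (long mode): %s\n",
           vendor, long_mode ? "yes" : "no");
    return 0;
}

Run it on a Core i7 and it reports "GenuineIntel" with the flag set; run it on an Athlon 64 or Phenom and it reports "AuthenticAMD" with the same flag set. Same feature bit, different badge.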

Apple computers use Intel processors, Intel chipsets and Nvidia GPUs. How does that figure into the picture of things, by your reckoning?

It might seem obvious to some, but it's usually the case that commercially available GPU technology standards are very similar too when it comes to Nvidia and ATi desktop cards, despite the differing brand-specific naming conventions. In this day and age there isn't much between them in their respective power ranges other than memory amounts, slight differences in default GPU and memory clock settings, and how the GPUs are cooled. However, again, Nvidia products seem to be a little more reserved across the general consumer GPU range, as the thresholds generally tend to be a little more limited/capped for stability outside of the overclockable specialities.

What will be interesting to see is how they might push the bounds of delivering more power efficiency (in terms of electrical power consumption) and increased graphical power at the same time for mobile computing platforms, since this could always transfer back to desktops to make more power-efficient desktop machines as a by-product.
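
For what it's worth, those points of comparison (memory amount and the core/memory clocks) are easy enough to check yourself on the Nvidia side. A rough sketch, assuming a reasonably recent Nvidia card and driver with the NVML library installed (include <nvml.h>, link with -lnvidia-ml); ATi/AMD cards would need a different API entirely:

/* Rough sketch: read card name, core/memory clocks and total VRAM via NVML. */
#include <stdio.h>
#include <nvml.h>

int main(void) {
    if (nvmlInit() != NVML_SUCCESS)
        return 1;

    nvmlDevice_t dev;
    if (nvmlDeviceGetHandleByIndex(0, &dev) == NVML_SUCCESS) {
        char name[NVML_DEVICE_NAME_BUFFER_SIZE];
        unsigned int gpu_clk = 0, mem_clk = 0;
        nvmlMemory_t mem = {0};

        nvmlDeviceGetName(dev, name, sizeof(name));
        nvmlDeviceGetClockInfo(dev, NVML_CLOCK_GRAPHICS, &gpu_clk); /* core clock, MHz */
        nvmlDeviceGetClockInfo(dev, NVML_CLOCK_MEM, &mem_clk);      /* memory clock, MHz */
        nvmlDeviceGetMemoryInfo(dev, &mem);                         /* VRAM, bytes */

        printf("%s: core %u MHz, memory %u MHz, %llu MB VRAM\n",
               name, gpu_clk, mem_clk, mem.total / (1024ULL * 1024ULL));
    }

    nvmlShutdown();
    return 0;
}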

Anyone remember Cyrix? RIP as it were.