Don't let the "classic" part of the title mislead you: many of the tested and proven principles of classical science, within the disciplines of physics and chemistry, are what allow all our modern-day tech to be made in the first place, as well as to actually function. In this technological age, where digital communication has accelerated people's ability to connect and has shaped modern societies as a whole, a principle like "convection" might not seem, on its own, to have any obvious application to human interaction in the modern digital world. With the roll-out of fiber optics we're no doubt set to accelerate this even further, but it's still not a complete end-to-end solution. Why? Because much of the technology we currently use still relies on electrical conductivity to operate, with very little use, at present, of the vastly faster light-signal fiber-optic technology at the micro-component level within domestic consumer electronics.

Nope, we're still using conductive and semi-conductive materials to carry electrical signals, which, I suppose, is still nothing to scoff at in terms of how far we've come technologically. But it does mean we still face limits in terms of speed and efficiency.

Take micro-processor chips, for example: the things inside your computers and general electrical goods that do slightly more complex work than just a couple of simple operational functions. They're still made using the semi-conductive material silicon. We all want things to be faster, but thanks to the laws of thermodynamics there's only so fast you can go with technologies that still rely on electrical conduction through conductive and semi-conductive materials.

Generally, the more conductive a particular material is, the better it carries electrical current. But for the purposes of computing at the micro-processor level, we want to be able to switch this conductivity on and off very quickly, producing a rapid stream of electrical signals, and to send these instructional signals to and from the appropriate places at high speed by altering voltage and current flows across the various pathways in the different parts of a micro-processor.
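
To make that idea concrete, here's a minimal Python sketch that treats a transistor as nothing more than a voltage-controlled switch. The threshold value and function name are my own inventions for illustration; a real transistor has analogue behaviour, leakage and switching delay that this toy model ignores.

    # Toy model: a transistor as a voltage-controlled switch (illustrative only).
    def transistor(gate_voltage, threshold=0.7):
        """Conducts (returns True) when the gate voltage exceeds the threshold."""
        return gate_voltage > threshold

    # Rapidly toggling the gate voltage switches conduction on and off,
    # producing a stream of digital on/off signals.
    signal = [transistor(v) for v in [0.0, 1.0, 0.0, 1.0, 1.0, 0.0]]
    print(signal)  # [False, True, False, True, True, False]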


Clock cycles

The computer's central micro-processor unit (which could be said to be the brain of the computer) sends and receives instructions to and from the various parts of a computer in order to make it function, and it does this in "clock cycles". You'll probably recognize the more common ratings of these in gigahertz (GHz), which are units of 1000 MHz, referring to the frequency of the cycles per second. So, for example, a CPU running at 1 GHz (which is 1000 MHz) is turning over one billion clock cycles per second to send the electrical signals on and through the system.

Computer overclockers constantly talk about pushing this over the manufacturer's recommended base settings by increasing the voltage across the system and through the processor. The problem is that by increasing the voltage to provide enough power to run at higher clock speeds, you also increase the amount of heat produced as a byproduct, because the silicon chip has a default level of electrical resistance as a general property anyway. This heat energy causes the particles within the silicon chip to vibrate more at the atomic level, and as a consequence the level of electrical conductivity decreases. Obviously this overheating can be avoided with sufficient cooling, and most overclockers have custom solutions, with some enthusiasts even going so far as to use liquid nitrogen.
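As a rough back-of-the-envelope, the standard approximation for a chip's dynamic (switching) power is P ≈ C × V² × f, which is why raising both voltage and clock frequency heats things up so quickly. The Python sketch below uses an invented capacitance figure purely for illustration; only the relationships matter, not the absolute numbers.

    # Rough numbers only: the capacitance value is invented for illustration,
    # but the relationships are the standard approximations.
    def cycle_time_ns(frequency_ghz):
        """One clock cycle at f GHz lasts 1/f nanoseconds."""
        return 1.0 / frequency_ghz

    def dynamic_power_watts(capacitance_farads, voltage_volts, frequency_hz):
        """Dynamic switching power is roughly P = C * V^2 * f."""
        return capacitance_farads * voltage_volts**2 * frequency_hz

    print(cycle_time_ns(1.0))  # 1.0 ns per cycle at 1 GHz (1e9 cycles/s)

    # Overclocking from 3.0 GHz at 1.20 V to 3.6 GHz at 1.35 V:
    base = dynamic_power_watts(1e-9, 1.20, 3.0e9)  # ~4.3 W with this toy C
    oc   = dynamic_power_watts(1e-9, 1.35, 3.6e9)  # ~6.6 W, roughly 52% more heat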

By default, processors are becoming faster, with the electrical signal pathways on semiconductor chips becoming smaller and narrower under the relatively rigid implementation of Moore's law: doubling the number of transistors with each generation while all the while trying to keep within the same kind of surface area. It's faster because there's less distance to travel, with lower voltage thresholds required for signal transmission across the pathways and transistors. In the last eight years alone we've seen a shift from the 90 nm, 65 nm and 45 nm generations right through to the current Core i7 chips, which are now at a crazy 32 nm, in the effort to cram as many transistors as possible into a tiny space. There is only so small you can actually go before the pathways become too narrow even for electricity to travel through reliably. We're pretty much already pushing that threshold at 32 nm in the ongoing effort to produce faster, cooler-running processors. At such a tiny fabrication level, errors from signal bleeding while sending instructions become more likely with any anomalous uncontrolled heat or over-volting.
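
For a feel of what that doubling means in raw numbers, here's a quick Python projection assuming the classic formulation of Moore's law (transistor counts doubling roughly every two years). The starting count of one billion transistors is an illustrative round figure for a 32 nm era chip, not an exact spec.

    # Back-of-the-envelope Moore's-law projection.
    def transistors_after(years, start_count=1e9, doubling_period_years=2):
        """Project transistor count assuming a doubling every two years."""
        return start_count * 2 ** (years / doubling_period_years)

    # Starting from an illustrative 1 billion transistors:
    for years in (2, 4, 8):
        print(years, f"{transistors_after(years):.2e}")
    # 2 -> 2.00e+09, 4 -> 4.00e+09, 8 -> 1.60e+10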

Currently, at 32 nm, it seems that for the moment the end of Moore's law might be technically avoidable through the use of multicore processor fabrication.
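
One standard way to reason about what those extra cores actually buy you is Amdahl's law, which says the overall speedup from n cores is limited by the fraction p of the workload that can actually run in parallel. A quick sketch, where the 0.9 parallel fraction is just an example value:

    # Amdahl's law: speedup from n cores when only a fraction p of the
    # workload is parallelisable. Shows why more cores are not a straight
    # substitute for a faster clock.
    def amdahl_speedup(parallel_fraction, cores):
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

    for cores in (2, 4, 8):
        print(cores, round(amdahl_speedup(0.9, cores), 2))
    # 2 -> 1.82, 4 -> 3.08, 8 -> 4.71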

Pushing a CPU beyond the voltage threshold its materials can handle, and generating excessive heat without appropriate cooling, would cause electrical signals to bleed into other signal pathways as particle vibration increases, which would most likely cause system errors. Combined with the abnormally increased electrical resistance, this will most likely produce errors, assuming the system is in fact still operational and no memory errors have already occurred before that point. Usually, though, most modern systems have a safety cut-off feature that would probably have kicked in long before then, unless it had actually been deactivated.
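
To illustrate the kind of safety cut-off described above, here's a hypothetical Python sketch of a simple thermal policy. The temperature thresholds and policy names are invented for illustration; real processors implement this sort of logic in firmware and hardware, not in user code.

    # Hypothetical thresholds, for illustration only.
    THROTTLE_TEMP_C = 90.0
    SHUTDOWN_TEMP_C = 105.0

    def thermal_policy(core_temp_c):
        """Decide what a safety cut-off would do at a given core temperature."""
        if core_temp_c >= SHUTDOWN_TEMP_C:
            return "emergency shutdown"  # cut power before damage occurs
        if core_temp_c >= THROTTLE_TEMP_C:
            return "throttle clock"      # drop frequency/voltage to shed heat
        return "run normally"

    for temp in (65.0, 92.0, 107.0):
        print(temp, "->", thermal_policy(temp))
    # 65.0 -> run normally, 92.0 -> throttle clock, 107.0 -> emergency shutdown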