Transhumanism, broadly speaking, is a futurist movement whose beliefs share the common theme of anticipating an evolutionary plateau beyond the current Homo sapiens. The general expectation is that in the near future far greater manipulation of human nature will become possible through techniques now visible on the technological frontier: machine intelligence greater than that of contemporary humans, direct mind-computer interfaces, genetic engineering, and nanotechnology. Transhumanists tend to believe that respect for human agency, even as practiced by humans in their current form, is valuable.
While frequently dismissed by most rationalists as speculation at best (especially in light of the many failures of artificial intelligence), transhumanism is a strongly-held belief among many computer geeks, notably such alpha geeks as synthesizer and accessible computing guru Ray Kurzweil (a believer in the "technological singularity", where technology evolves beyond humanity's current capacity to understand or anticipate it) and Sun Microsystems co-founder and Unix demigod Bill Joy (who believes the inevitable result of AI research is the obsolescence of humanity).
Scientific criticisms
Sadly, a lot of the underpinnings of transhumanism rest on a sort of blind-men-at-the-elephant thinking: people assuming that because something can be imagined, it must be possible. Transhumanism is particularly associated with figures in computer science, a field that is in some ways more math and art than a true experimental science; as a result, a great many transhumanists tend to conflate technological advancement with scientific advancement. Though the two are intimately related, they are distinct. In fact, though transhumanists strenuously deny it, a great number of their arguments are strongly faith-based: they assume that because there are no known barriers to their pet development, it is inevitably going to happen. Seldom is the issue of unknowns, known or otherwise, factored into the predictions.
Singularity
The example of the singularity is instructive: for a great many people, at least part of the singularity hinges on being able to create a true artificial intelligence. While it's reasonable to contend that the complexity inherent in the human brain is within our technological reach, singularitarians tend to assume that having the capacity to emulate human intelligence means having the ability to do so. However, singularitarians hit a wall when confronted with the realities of brain research: though a true AI may in fact be possible, not nearly enough is known about the brain to understand its functions to the degree necessary to create a workable emulation, which makes any prediction of such a creation meaningless at best and dishonest at worst.
"Whole brain emulation" (WBE) is a term used by transhumanists to refer to, quite obviously, the emulation of a brain on a computer. While this is no doubt a possibility, it encounters two problems that keep it from being a certainty anytime in the near future. The first is a philosophical objection: For WBE to work, "strong AI" (i.e. AI equivalent to or greater than human intelligence) must be true. A number of philosophical objections have been raised against strong AI, generally contending either that the mind or consciousness is not computable or that a simulation of consciousness is not equivalent to true consciousness. There is still controversy over strong AI in the field of philosophy of mind.[5] A second possible objection is technological: WBE may be possible, but the technology to fully simulate a human brain (in the sense meant by transhumanists, at least) is a long way away. Currently, no computer (or network of computers) is powerful enough to simulate a human brain. Henry Markram, head of the Blue Brain Project, estimates that simulating a brain would require 500 petabytes of data for storage and that the power required to run the simulation would be about $3 billion. (However, he optimistically predicts this will be possible in ten years.)[6] In addition to technological limitations in computing, there are also the limits of neuroscience. Neuroscience currently relies on technology that can only scan the brain at the level of gross anatomy (e.g., fMRI, PET). Forms of single neuron imaging (SNI) have been developed recently, but they can only be used on animal subjects (usually rats) because they destroy neural tissue.
Dreams of immortality: cryonics and mind uploading
Yet another transhumanist goal is mind uploading, one way they claim we will be able to achieve immortality. Aside from the problems with WBE listed above, mind uploading faces a philosophical problem of its own, namely the "swamp man" problem: would the "uploaded" mind be "you," or simply a copy or facsimile of your mind? Cryonics is another favorite of transhumanists. In principle, cryonics is not impossible, but its current form is based largely on rank speculation and costs a load of dough.