Alternative medicines are popular, but do any of them really work?

(iStockphoto) Coconut oil is sometimes recommended for helping prevent Alzheimer’s disease.

By Paul Offit

If people want to burn fat, detoxify livers, shrink prostates, avoid colds, stimulate brains, boost energy, reduce stress, enhance immunity, prevent cancer, extend lives, enliven sex or eliminate pain, all they have to do is walk into a vitamin store and look around.

The shelves will be lined with ginkgo or rose and orange oils touted as aids for memory; guarana and cordyceps for energy; chicory root for constipation; lemon balm oil, ashwagandha, eleuthero, Siberian ginseng and holy basil for stress; sage and black cohosh for menstrual pain; coconut oil and curry powder for Alzheimer’s disease; saw palmetto for prostate health; sandalwood bark to prevent aging; garlic for high cholesterol; peppermint oil for allergies; artichoke extract and green papaya for digestion; echinacea for colds; chondroitin sulfate and glucosamine for joint pain; milk thistle for hepatitis; St. John’s wort for depression; and tongkat ali for sexual potency.

The question, however, is: Which products work? And how do we know they work? Fortunately, thanks to James Lind, we can figure it out.

When Lind climbed aboard the HMS Salisbury intent on testing whether citrus was a cure for scurvy in 1740, he moved medicine from a faith-based system to an evidence-based system. No longer do we believe in treatments. We can test them to see whether they work.

Although the size and cost of clinical studies have increased dramatically since the days of Lind, the claims made about alternative remedies are testable, eminently testable.

In that sense, there’s no such thing as alternative medicine. If clinical trials show that a therapy works, it’s good medicine. And if a therapy doesn’t work, then it’s not an alternative.
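To make the point concrete, here is a minimal sketch, with invented numbers, of what testing such a claim looks like once trial data are in: a hypothetical remedy for shortening colds is compared against a placebo with an ordinary significance test. A real trial would be randomized, blinded, and far larger; the only point is that the question reduces to plain statistics.

```python
# Hypothetical illustration only: does a remedy shorten cold duration?
# All numbers are invented for the sake of the example.
from scipy import stats

# Days of cold symptoms in each (made-up) group
placebo = [7.1, 6.5, 8.0, 7.4, 6.9, 7.8, 7.2, 6.7, 7.5, 7.0]
remedy  = [6.9, 7.2, 6.4, 7.6, 7.1, 6.8, 7.3, 7.0, 6.6, 7.4]

t_stat, p_value = stats.ttest_ind(remedy, placebo)
print(f"mean placebo: {sum(placebo)/len(placebo):.2f} days")
print(f"mean remedy:  {sum(remedy)/len(remedy):.2f} days")
print(f"p-value: {p_value:.3f}")  # a large p-value means no evidence the remedy helps
```

If the remedy truly worked, the difference in means would be large and the p-value small; if not, the data say so just as plainly.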

For example, Hippocrates used the leaves of the willow plant to treat headaches and muscle pains. By the early 1800s, scientists had isolated the active ingredient: aspirin. In the 1600s, a Spanish physician found that the bark of the cinchona tree treated malaria. Later, cinchona bark was shown to contain quinine, a medicine now proven to kill the parasite that causes malaria. In the late 1700s, William Withering used the foxglove plant to treat people with heart failure. Later, foxglove was found to contain digitalis, a drug that increases heart contractility. More recently, artemisia, an herb used by Chinese healers for more than a thousand years, was found to contain another anti-malaria drug, which was later called artemisinin.

“Herbal remedies are not really alternative,” writes Steven Novella, a Yale neurologist. “They have been part of scientific medicine for decades, if not centuries. Herbs are drugs, and they can be studied as drugs.”

Looking at the claims

In many cases, though, when natural products have been put to the test, they’ve fallen short of their claims. For instance, although mainstream medicine hasn’t found a way to treat dementia or enhance memory, practitioners of alternative medicine claim that they have: ginkgo biloba. As a consequence, ginkgo is one of the 10 most commonly used natural products.

MORE HERE:

http://tinyurl.com/mfmw8q7

Emerging Technologies, Fuelling New Paradigms

Published by Steven Novella

Most Fridays I submit a blog post to Swift, the official blog of the JREF. The article I submitted this morning is about a new study demonstrating a brain-machine interface (BMI) that allows a rhesus monkey to control two robotic arms at the same time. This is a technology I have been following here at NeuroLogica, blogging about it whenever I think a cool breakthrough has been made.

The topic touches on several areas simultaneously that I find fascinating – neuroscience, computer technology, virtual reality, and predicting future technology. I make the point, as I often do, that predicting future technology has a terrible track record, with the only reasonable conclusion being that it is very difficult.

It’s fun to look back at past future predictions and see what people generally got right and what they got wrong, and then see if we can learn any general lessons that we can apply to predicting future technology.

Major Hurdles

For example, we are not all flying around with jetpacks or taking our flying car to work. This has become, in fact, a cliche of failed future technologies. I think the lesson here is that both of these technologies suffer from a major hurdle – fuel is heavy, and if you have to carry your fuel around with you it quickly becomes prohibitive. There just doesn’t seem to be any way to overcome this limitation with chemical fuel or batteries.

In other words, whenever the viability of a technology depends upon making a major breakthrough that changes the game with respect to some major limitation imposed by the laws of physics, you cannot count on that technology succeeding in the short to medium term. Long term – all bets are off.

The coming hydrogen economy is another example. It turns out that safely and efficiently storing large amounts of hydrogen for convenient release is a non-trivial technical problem, one that will not be solved as a matter of course.

Incremental Advance

By contrast, even in the 1980s, and certainly by the early 1990s, the promise of the coming internet was in the air. I remember fiction, popular science articles, and conversations about how the world would change when information became digital and ubiquitous. No one predicted eBay and Twitter specifically, but online commerce and communication were certainly anticipated.

The difference here is that computer and electronic technologies had a proven track record of continuous incremental improvement, and that was all that was necessary for the dreams of the internet to become reality. You can extrapolate incremental progress much more reliably than massive breakthroughs.

Not So Fast

Smartphones, also anticipated for decades, are now a reality. The additional lesson here is that sometimes it takes longer than we predict for a technology to mature. I remember people desperately trying to make use of early portable computing devices in the 1990s (like the Newton and other PDAs). I was there, using my PDA, but the functionality was just not sufficient to make it easier than a paper notebook. I’m sure some people made it work for them, but widespread adoption was just not happening.

Now, 20 years later, smartphones have finally achieved the promise of portable personal computing devices. People use smartphones not only for communication, but to quickly look up information, to update their Twitter feed, to listen to music and podcasts, as still and video cameras, and as portable GPS devices. They are still rapidly increasing in power and utility, but they have definitely passed the bar of general adoption.

When they were just PDAs, carrying around a small computer was not that useful. It took the development of other applications, such as GPS, the internet, MP3s, and miniaturized cameras, to really make the technology useful.

Yes, But What Is It Good For?

Perhaps the most difficult prediction involves how a new technology will be used. Microwaves were developed for cooking. It turns out, they are terrible tools for cooking. The technology might have completely died on the vine, except it turns out they are really convenient for heating food – defrosting, rewarming, and, of course, making popcorn. They quickly became indispensable.

Segways were supposed to change the way people move about a city. They utterly failed in this goal. However, they enjoy a niche for security guards to move around malls and airports.

This is, in my opinion, the trickiest part of predicting future technology adoption. Even when the technology itself is viable, it’s hard to predict how millions of people will react to the technology. Why are we not all using video-phones all the time? In the 1980s I would have sworn they would be in wide adoption as soon as the technology was available. Now I could, if I chose, make every phone call a video call, but I choose not to. For most calls, it’s just not worth it. I’d rather not have to look into a camera and worry about what I am doing.

Likewise, who would have thought that people would prefer texting to talking on the phone? That was a real shocker to me.

Sometimes the adoption of a specific technology depends upon someone finding a good use for it. The technology itself may be viable, but utilization really determines whether or not it will be adopted. There is no substitute for the real-world experiment of millions of people getting their hands on a technology or application and seeing if they like it.

The Future

With all this in mind, what are the technologies that I think are likely to have a huge impact on our future? This is a huge topic, and maybe I’ll dedicate a future blog post to exploring it further, but let me name some that come to mind.

Carbon nanotubes and graphene are the plastics and the semiconductors of the 21st century rolled into one. These materials are strong and have interesting, tunable conductive properties that make them potentially usable in small, energy-efficient, and flexible electronics. The major limitation right now is mass-producing carbon nanofibers in long lengths and large amounts efficiently and with sufficient quality. This seems to be an area of steady progress, however.

This may seem like an easy one, but stem cells clearly have tremendous potential. However, I would have to file this one under – major breakthrough still necessary in order to achieve the full potential of stem cell technology. I also think this is one that will mature 2-3 decades later than popularly anticipated. Maybe by the middle of the 21st century we will begin to see the promise of growing or regenerating organs, reversing degenerative diseases, and healing major damage and disease with stem cells.

And to bring the article back around to the original topic – brain-machine interfaces in all their manifestations. The ability to affect brain function with electricity, and the ability to communicate between the brain and external devices (in both directions – sensory input and motor or other output), mediated by a computer chip, have massive implications.

On the one hand, this is a new paradigm in treating the brain by altering its function. Right now the major medical intervention for brain function is pharmacological, but this approach has inherent limits. The brain, however, is not only a chemical organ but also an electrical one, and increasingly we are seeing electrical devices, such as deep brain stimulators, used to treat neurological diseases.

Beyond that, the ability to interface a brain and a computer essentially brings neuroscience into the computer age, which further means that applications will benefit from the continued incremental advance of computer technology. It may take a few more decades than we hope or anticipate, but we can now clearly see the day when paralyzed patients control robot legs or arms through a BMI, when we can enter a virtual world and not only control but actually mentally occupy an avatar, and when people can control anything technological in their environment through thought alone.
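As a rough, purely illustrative sketch of what “interfacing a brain and a computer” involves computationally, the toy code below fits a simple linear decoder that maps simulated neural firing rates to an intended 2-D arm velocity. Everything here is invented for illustration; real BMIs record hundreds of channels and use more sophisticated, adaptive algorithms such as Kalman filters.

```python
# Toy sketch of a linear BMI decoder: predict 2-D movement velocity from
# neural firing rates. All data are simulated; this is not any lab's method.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_neurons = 500, 32

# Simulated "true" tuning: how intended velocity drives each neuron's firing
true_tuning = rng.normal(size=(2, n_neurons))
velocity = rng.normal(size=(n_samples, 2))  # intended (vx, vy) during calibration
rates = velocity @ true_tuning + rng.normal(scale=0.5, size=(n_samples, n_neurons))

# Calibration: fit a least-squares linear map from firing rates back to velocity
decoder, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Decode a new trial and compare with the intended movement
new_velocity = rng.normal(size=(1, 2))
new_rates = new_velocity @ true_tuning + rng.normal(scale=0.5, size=(1, n_neurons))
print("intended:", new_velocity.round(2))
print("decoded: ", (new_rates @ decoder).round(2))
```

The point of the sketch is that once the mapping is learned during a calibration phase, decoding is just fast linear algebra – exactly the kind of task that rides the continued incremental advance of computer hardware.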

In short, it has been demonstrated that it is possible for humans to merge with their machines. I know this sounds like hyperbole and science fiction, but the science is pretty solid if immature.

This technology is coming. What remains to be seen is what applications will develop, and how people will react.