wiccsmall.png

WICC NFC antenna and wireless charger adapter (via Duracell)

 

Some innovations on the horizon will let you recharge your mobile devices wirelessly and on the go. The first, announced by Duracell, is called the ‘PowerMat WICC’. The WICC is a super-thin add-in card (which doubles as an NFC antenna) that lets a device draw power wirelessly from any Duracell or competitor charging mat. The technology is paired with an app that locates the nearest charging station in case you need a top-up before your mobile device loses all power. Your phone will require an add-in plug (or specialized case) that is said to be "easily installed"; otherwise, just wait for your wireless provider to have them built in with future designs. Duracell is not releasing the add-in card anytime this year, as the company is waiting for phone manufacturers to come on board with the WICC specifications. My guess is that it will show up around, or after, the 2012 holiday season.

 

620x350.20120215.nanotech0592-460x260.jpg

Corey Hewitt holding a piece of "Power Felt" (via Wake Forest University)


The next charging innovation needs to be ‘felt’ rather than seen. Known as ‘Power Felt’, this charger was developed by Wake Forest University graduate student Corey Hewitt. The two-inch piece of black fabric is composed of carbon nanotubes bound in plastic fibers, and its thermoelectric properties convert body heat into electrical energy. While it has potentially unlimited uses beyond mobile-device charging, it is not yet cost-effective. At roughly $1,000 US per kilogram, only demand from leading electronics manufacturers will bring the cost of Power Felt down and start industry adoption. Still, Power Felt is a genuinely novel approach: recharging mobile devices just by carrying it. No word yet on exact heat-to-energy conversion figures.
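To put body-heat harvesting in perspective, here is a rough sketch of thermoelectric output under matched-load conditions. The Seebeck coefficient, temperature gradient, and internal resistance below are illustrative assumptions, not published Power Felt figures:

```python
# Rough sketch of thermoelectric output for a Power Felt-like material.
# All parameter values are illustrative assumptions, not measured data.

def thermoelectric_power(seebeck_v_per_k, delta_t_k, internal_resistance_ohm):
    """Maximum power delivered to a matched load: P = (S * dT)^2 / (4 * R)."""
    open_circuit_voltage = seebeck_v_per_k * delta_t_k
    return open_circuit_voltage ** 2 / (4 * internal_resistance_ohm)

# Assume a 100 uV/K effective Seebeck coefficient, a 10 K gradient between
# body heat and ambient air, and 100 ohm internal resistance.
p = thermoelectric_power(100e-6, 10, 100)
print(f"{p * 1e9:.2f} nW")  # -> 2.50 nW
```

Numbers in this range hint at why cost per watt, not just cost per kilogram, is the real adoption hurdle.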

 

I like the direction that industry people are headed. Wireless charging and energy-scavenging are paramount features in the future of staying mobile.

 

Cabe

http://twitter.com/Cabe_e14

dna usb.jpg

MinION (via Oxford Nanopore Technologies)

 

In the fast-advancing technological world in which we live, it is only a matter of time before we all have ready access to our own genome sequence. A UK firm has built a small device that brings us one step closer to that possibility. A surge in medical innovation may soon follow.

 

 

Oxford Nanopore Technologies has recently constructed a device that can sequence simple genomes through the USB port of your own computer. Called the MinION, it can sequence a genome in two hours, reading DNA strands up to 10,000 base pairs long within that time frame. "We just read the entire thing in one go," said chief technology officer Clive Brown.

 

 

The device accomplishes the enormous task of sequencing using nanopores: small organic molecules with an extremely narrow hole, only 10 nanometers wide, at their centers. The nanopores are embedded in synthetic polymer membranes with exceedingly high electrical resistance. When a potential is applied across a membrane, a current flows through the small hole at the center of each nanopore.

 

 

When sequencing, DNA is added to a solution containing enzymes that attach to the ends of the DNA strands. Each enzyme then docks onto a nanopore, which acts as a ratcheting device, feeding the strand through one base at a time. The distinct electrical characteristics of each of the four DNA bases distort the current flowing through the nanopore as the strand passes. The MinION distinguishes the four different disruptions and records the sequence accordingly. The sensing components are integrated onto a small chip with approximately 512 nanopores, creating the potential to read up to 10,000 bases per second.
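The base-calling idea above can be sketched in a few lines: each base perturbs the pore current by a characteristic amount, and each measurement is classified by its nearest reference level. The current values here are invented for illustration; real nanopore signals are far noisier and are interpreted over runs of several bases, not single ones:

```python
# Toy illustration of nanopore base calling. Reference levels are made up.
REFERENCE_LEVELS = {"A": 50.0, "C": 42.0, "G": 35.0, "T": 28.0}  # picoamps (illustrative)

def call_bases(current_samples):
    """Classify each current sample as the base with the closest reference level."""
    bases = []
    for sample in current_samples:
        best = min(REFERENCE_LEVELS, key=lambda b: abs(REFERENCE_LEVELS[b] - sample))
        bases.append(best)
    return "".join(bases)

print(call_bases([49.1, 41.5, 36.0, 27.2, 50.8]))  # -> ACGTA
```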

 

 

While other techniques exist to screen DNA, the MinION is superior in two ways: it does not need to break apart individual strands in order to sequence them, reading continuously up to 10,000 bases, and the DNA does not need to be amplified first. The MinION is expected out at the end of this year and will cost around $900.

 

Cabe

http://twitter.com/Cabe_e14

In the future, television sets will be even more impressive than the cutting-edge sets that have just reached the market. That is according to a panel of experts at the International Solid-State Circuits Conference, who said they expect television sets to become even smarter and more intuitive than those currently available to consumers.

 

One of the biggest advancements will be the popularisation of glasses-free 3D technology, which would revolutionise the experience of watching television programmes and cinema. The experts also suggested that free-viewpoint television, which allows users to view a 3D scene while freely changing the viewpoint, will become a popular feature.

 

http://aperture.adfero.co.uk/Image/Original/14001146

 

"Over the last few years," explained David Min, vice-president of LG Electronics' software center, "there have been big changes in mobile phones and communication devices. I think similar changes will happen in television, as well."

 

"However," he added, "I think the changes that will happen in TV will be somewhat different from what has happened in mobile phones."

 

Furthermore, Mr Min said that he expects television sets to implement more smart functionality, meaning that they will become the center of entertainment in the home. "Being smart is about providing some connectivity," he observed. "In the old days, the TV was nothing but a medium. But with connectivity, the TV is getting more intelligent."

 

This comes shortly after Gene Munster, a Managing Director and Senior Research Analyst at Piper Jaffray, an international middle-market investment bank and institutional securities firm, claimed that Apple is to unveil a fully-fledged television set in time for Christmas 2012.

 

Interactive TV would be available in time for "the holiday season" next year, according to Mr Munster, who argued that the product will be so impressive that people considering buying a new television set should hold off for the time being. Apple will, however, sell the product at twice the price of an equivalent set, he confirmed.

dt.common.streams.StreamServer.cls.jpg

Chicago mayor Rahm Emanuel with STEM representatives from Microsoft, IBM, Verizon, and Motorola at the opening of one school

 

The public school system in the greater Chicagoland area is about to undergo a transformation with the help of some of the tech industry's heavyweight companies. Built around STEM (Science, Technology, Engineering and Mathematics) education, the project is a combined effort by companies such as Microsoft, Verizon, IBM and Motorola to educate teachers as well as students in technology-related fields. The city of Chicago plans to open five separate six-year high schools around the metro area that will give students the knowledge and job qualifications those companies look for.

 

Instead of the typical high school diploma after a traditional four-year schooling, these students will receive an associate’s degree in the field they intend to pursue. It does not stop there: the students will also receive a ‘first-in-line’ job interview. The schools will educate roughly 1,090 kids in grades 9 through 14 (13 and 14 are college level) and will open in fall 2012. For more information, visit the project's site: http://www.stemedcoalition.org/

 

Chicago is becoming a nexus for technological experimentation. The city's tallest building, the Willis Tower, is toying with going solar. The city is pushing for an extensive network of charging stations for electric vehicles. And although not directly in the city, Argonne National Laboratory is opening the world's largest alternative-energy research facility.

 

Cabe

http://twitter.com/Cabe_e14

phase a.JPGphase b.JPG

Metatronic states (via University of Pennsylvania)

 

New technology often leads to a shift in the way we think about design. Engineers know this better than anyone, especially when it comes to circuit design. With that in mind, researchers from the University of Pennsylvania have built the first ‘lumped’ optical circuit powered by light. Designed by Professor Nader Engheta of the university’s School of Engineering and Applied Science along with some of his students, the circuit replicates lumped circuit elements (resistors, inductors and capacitors) with nano-rods, allowing it to operate using only optical wavelengths. Engheta and his team tested the new technology, called metatronics, by illuminating the nano-rods with light in the mid-infrared range. After trying nine different combinations of lumped circuit elements, the team found "that the optical voltage and current were altered by the optical resistor". This means the circuit's behavior can be changed almost instantaneously depending on the light wavelength being used. Metatronics can do this because it is the elements that make up the circuit that are being manipulated, rather than the electric field.
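The point of the 'lumped element' analogy is that ordinary impedance rules carry over to optical frequencies. A minimal sketch of that bookkeeping, with purely illustrative component values (not the Penn team's measurements):

```python
# Series RLC impedance, Z = R + jwL + 1/(jwC), evaluated at an optical
# frequency. Component values are illustrative assumptions only.
import cmath

def series_rlc_impedance(r_ohm, l_h, c_f, freq_hz):
    """Complex impedance of a series RLC at the given frequency."""
    w = 2 * cmath.pi * freq_hz
    return r_ohm + 1j * w * l_h + 1 / (1j * w * c_f)

# Mid-infrared light around 10 um wavelength corresponds to roughly 30 THz.
z = series_rlc_impedance(50, 1e-15, 1e-18, 30e12)
print(abs(z), z.real)
```

The same arithmetic applies whether the "components" are copper parts at radio frequencies or nano-rods at infrared ones; only the element values change with wavelength.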

 

Professor Engheta spoke of the initial mindset that may open the door to innovation, "Looking at the success of electronics over the last century, I have always wondered why we should be limited to electric current in making circuits... If we moved to shorter wavelengths in the electromagnetic spectrum — like light — we could make things smaller, faster and more efficient.”

 

Cabe

http://twitter.com/Cabe_e14

bb01e773.jpg

CleanSpace concept drawing (via EPFL)

 

Space is a messy place. According to NASA, over ten million pieces of debris orbit the Earth at speeds of around 36,000 km/h. That junk is hazardous not only to astronauts but can also damage or destroy spacecraft and satellites. To combat the problem, engineers from the Swiss Space Center at EPFL (Ecole Polytechnique Federale de Lausanne) have designed a satellite that performs a kind of orbital housekeeping. Called ‘CleanSpace One’, the satellite finds its target and latches on with a grappling mechanism. Once it has hold of the garbage, CleanSpace One de-orbits, and both it and the garbage burn up harmlessly in the planet’s atmosphere. To match the speed of space junk travelling anywhere from 28,000 to 36,000 km/h, CleanSpace One will use a special ultra-compact motor being developed in EPFL labs. Developing and deploying CleanSpace One will cost about 10 million Swiss francs, and it will be tested within the next five years with the retrieval of either the SwissCube picosatellite (launched in 2009) or TIsat-1 (launched in 2010).
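As a sanity check on those speeds: circular orbital velocity follows directly from v = sqrt(GM/r). A quick sketch using standard constants:

```python
# Circular orbital speed at a given altitude, v = sqrt(GM / r).
import math

MU_EARTH = 3.986e14   # Earth's gravitational parameter GM, m^3/s^2
R_EARTH = 6.371e6     # mean Earth radius, m

def circular_orbit_speed_kmh(altitude_m):
    v_ms = math.sqrt(MU_EARTH / (R_EARTH + altitude_m))
    return v_ms * 3.6

# A satellite at ~700 km altitude moves at roughly 27,000 km/h.
print(round(circular_orbit_speed_kmh(700e3)))
```

That lands near the low end of the quoted 28,000-36,000 km/h range; the higher figures reflect that two objects in different orbits can close on each other far faster than either one's own speed.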

 

This development comes after a long line of other proposed space-debris collection schemes. An anime storyline has inspired many to clean up the skies. The Russian POD system is like a garbage truck in space. SETI will be used to track debris. NASA wants to use lasers to shoot down larger pieces. And the space station has to maneuver constantly to avoid large bits flying into its path.

 

A messy, and scary, place indeed.

 

Cabe

http://twitter.com/Cabe_e14

http://aperture.adfero.co.uk/Image/Original/14021285

 

Since creating its Custom Foundry division around two years ago, Intel, the world's biggest maker of semiconductors, has kept developments there a closely guarded secret. That secrecy extends to the division's future plans as well as details of some of its clients.

 

The official line from Intel is that it is merely learning how to make chips for other companies and according to the firm, the Custom Foundry division represents a long-term growth opportunity.

 

"We formed Intel Custom Foundry a couple of years ago," explained Chuck Mulloy, a spokesman for Intel. "This is a nascent program that we are taking a slow and deliberate approach to building. We believe we have world-class manufacturing capabilities that have served us well over the years. Given that expertise, we believe there could be an opportunity for future growth for Intel."

 

Intel, as the biggest chipmaker, is under intense pressure to maintain its market dominance and therefore invests huge sums every year in research and development of new manufacturing processes. The firm also commits significant time and money to building new manufacturing facilities. And the competitive pressure is sharpened by the fact that every new technology node costs more than the previous one.

 

But to maximize utilization of its production facilities, Intel is compelled to make chips for other firms. Despite its worldwide reputation and resources, though, the firm has no experience running a foundry business, and for now it must learn the skills needed to become a market leader in this area.

 

"We also understand that running a foundry business can be quite different from being an integrated device manufacturer," Mr Mulloy conceded. "So we are taking a slow and steady approach to ensure that we can serve our foundry customers while taking advantage of our world-class manufacturing processes."

 

Intel has confirmed that it currently has two foundry customers, Achronix and Tabula, though it claims to be working with other firms. "We have other agreements," Mr Mulloy said, adding that those customers have elected not to make themselves public.

 

The question remains, though, whether Intel's foundry business has a medium- or long-term future.

Boston Dynamics "Petman" (via Boston Dynamics)

 

The Defense Advanced Research Projects Agency (DARPA) is set to pursue breakthrough advances in telepresence and remote operation of ground systems by totally immersing soldiers in robotic "avatars." DARPA has dedicated $7 million of its 2012 budget to developing an avatar that can be used in combat and other tasks. Duties include, but are not limited to, "countering IEDs and mines, search and rescue missions, and recovering casualties during combat."

 

 

The effort is being called the "avatar program," undoubtedly inspired by the James Cameron movie, Avatar. Like the film, DARPA plans to have a soldier controlling a robot avatar from a safe location while still possessing the feeling of being present on the battlefield. The program will develop systems of communication between the avatar and soldier to effectively control the robot from some distance. If successful, it possesses the potential to save lives and reduce casualties. (At least for one side of the battle.)

 

 

DARPA is no stranger to the field of robotics. In the past, the agency worked with Boston Dynamics to design Petman, a semi-autonomous bipedal machine capable of walking much like a human. Additionally, AlphaDog, a large dog-shaped robot that can carry up to 400 pounds and traverse 20 miles, is being built and tested to assist soldiers in combat. There have also been successful investigations into mind-controlled robotics. DARPA funded research into a prosthetic arm capable of many motions similar to a human arm's. It can bend, twist and rotate in 27 different ways and is controlled by a microchip in the brain, which records neuron activity and decodes the signals into commands that drive the prosthetic arm.

 

Combine all these developments, and the future is a grim, soulless battlefield.

 

 

Cabe

http://twitter.com/Cabe_e14

01-1329998885-2084258.jpg

Jianhui and the controlled robotic limb (via Zhejiang University)

 

Animals are capable of much: learning sign language or aiding in search and rescue immediately come to mind. One test animal, a monkey named Jianhui, is now adept at controlling robotics with its mind. Zheng Xiaoxiang and his team of researchers at Zhejiang University in Zijingang, China identified and decoded the electrical signals in the area of the monkey’s brain responsible for hand movement. The researchers then surgically implanted two microchips, connected to over 200 neurons inside Jianhui’s brain, which interface with an external computer that deciphers the brain signals. The signals are then sent to an advanced robotic hand, developed by STMicroelectronics and the BioRobotics Institute of Scuola Superiore Sant’Anna, which mimics the monkey’s hand movements such as grabbing, pinching and holding. While animal rights activists may not agree with what Zheng Xiaoxiang and his team have done, the ‘brain-machine interface’ does have practical implications, such as giving people with prosthetic limbs a more natural way of controlling them.
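The decoding step, mapping recorded firing rates to movement commands, can be sketched with a simple least-squares linear model. The data below is synthetic; real systems like the one described record hundreds of channels and typically use far more sophisticated (often Kalman-filter) decoders:

```python
# Minimal brain-machine-interface decoding sketch: fit a linear map from
# neural firing rates to movement, then decode a new observation.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "training" data: 200 time steps, 20 neurons, 2 movement dims.
true_weights = rng.normal(size=(20, 2))
firing_rates = rng.poisson(5, size=(200, 20)).astype(float)
movements = firing_rates @ true_weights + rng.normal(scale=0.1, size=(200, 2))

# Fit the decoder: least-squares solution to firing_rates @ W = movements.
weights, *_ = np.linalg.lstsq(firing_rates, movements, rcond=None)

# Decode a new observation of firing rates into a movement command.
new_rates = rng.poisson(5, size=(1, 20)).astype(float)
decoded = new_rates @ weights
print(decoded.shape)  # -> (1, 2)
```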

 

Jianhui is not the first to control a robot arm with thought. A similar system was already built and tested on a human subject with much success. However, that subject volunteered for the procedure; the monkey did not. It is a shame that animals get such a raw deal in science.

 

Cabe

http://twitter.com/Cabe_e14

CC1a.jpg

TV static to become WiFi

 

A bill recently passed by the U.S. Congress will allow the Federal Communications Commission (FCC) to auction off part of the television spectrum. The bill is primarily an extension of the payroll tax cut, letting workers keep more of their paychecks; the government’s cut of the auction proceeds will help offset the tax break. The auction is expected to bring in $25 billion USD. (Someone has to pay for everything.)

 

 

The TV spectrum at issue sits in the 700 MHz band previously used for broadcasting to analog television sets. The dawn of the digital age brought a superior way of viewing television: digital broadcasting replaced the old analog band, ushering in a "higher quality" viewing experience. After the fiasco of switching every analog TV to digital via a converter box, that spectrum has remained relatively dormant.

 

 

As a result, many of the bands formerly used for television broadcasting can now be used to speed up wireless carrier networks or expand high-speed internet coverage. Many corporations are seeking to purchase part of the spectrum to construct a long-distance Wi-Fi system, possibly replacing home or company modems. Other parts of the spectrum, specifically the D block, a 10 MHz portion, will be allocated to the government for emergency response teams. The goal is a national broadband network for police and fire departments and other public safety organizations. Although the bill has just been passed, it may be one to two years until the auctions actually take place.

 

 

Looks like the strict spectrum rules weighing on the White Space Coalition may lighten up.

 

 

Cabe

http://twitter.com/Cabe_e14

project_image_2.jpg

Carbon Capture concept (via U.S. Department of Energy)

 

Carbon capture and sequestration (CCS) is a technology that filters most of the pollutants produced by refining and power generation, such as carbon, sulfur dioxide, mercury and nitrous oxide, before they enter the atmosphere. The technology eliminates 90% of the carbon, 99% of the sulfur dioxide, 90% of the nitrogen oxide and 99% of the mercury emitted by coal power plants. Oil companies, along with oil-producing countries like Saudi Arabia, have been pushing the technology in the hope that fossil fuels will seem less destructive and stay extremely profitable. Captured carbon can be contained in underground reservoirs and can also be used in industrial processes or sold as a product, which obviously increases its popularity among investors.

 

 

The European Union finally accepted the use of CCS in its carbon-reduction plan this past December. In the U.S., CPS Energy of San Antonio has agreed to purchase energy from the Texas Clean Energy Project. The Texas project will use an integrated gasification combined cycle (IGCC) plant, and will still emit 10% of the carbon emitted by old power plants.

 

 

Around 2.9 million metric tons of CO2 is captured annually at these facilities. Of that, 2.4 million metric tons (83%) are used to push out every last drop of oil at the West Texas Permian Basin oil field. The rest will be sold to whoever may need it. The operators claim that most of the CO2 is stopped from entering the atmosphere, but the claim is dubious.
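The arithmetic behind that percentage is straightforward and worth checking:

```python
# Share of captured CO2 routed to enhanced oil recovery:
# 2.4 of 2.9 million metric tons per year.
captured_mt = 2.9
to_oil_recovery_mt = 2.4
share = to_oil_recovery_mt / captured_mt
print(f"{share:.0%}")  # -> 83%
```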

 

 

This technology is sure to reduce many of the damaging effects of fossil fuels. However, if we aim for sustainability, focusing resources on increased drilling and use of extremely finite fuels is obviously veering from a sustainable path. The drilling itself disrupts natural environments and will always consume precious fresh water and add pollution.

 

project_image_1.jpg

Coal gasification concept (via Texas Clean Energy Project)

 

 

There are truly viable clean energy technologies that are nowhere near as destructive, dirty or depleting, and that could certainly address the energy crisis, yet they are impeded by claims that there is not enough money. More resources could be devoted to mass-manufacturing them were there not already so much old money in oil and so little agreement on how to profit from renewables. It is clear those companies hold a seemingly controlling influence on the federal government and will not stop until they find a way for their particular business to grow, regardless of ecological harmony or sustainability.

 

 

Huge amounts of aid and funding were given to the Texas Clean Energy Project by the U.S. government alone: $450 million from the U.S. Department of Energy and $211 million via the American Recovery and Reinvestment Act. But this accounts for only 27% of the final cost of one power plant; the rest of the $2.4 billion will come from investors. The technology will come at a higher price than simply harnessing clean renewable energy. It is the forecasted profits that have lured investors and sparked the promise of CCS becoming widespread. It is easy to see that federal and corporate decision-makers see money as the resource, but fail to see the value in sustaining their people and the other precious resources we inherit from the earth.

 

 

One obvious question is whether we should aim to thoroughly deplete natural fossil fuels at all. These resources are extremely precious, finite, and the cornerstone of many present applications, but the future will surely demand an alternative. CCS technology is sure to clean up the industry considerably and should be implemented anyway. If only we acted on some value other than money.

 

 

Cabe

http://twitter.com/Cabe_e14

An impressive 27 of the 33 major IC product categories defined by the World Semiconductor Trade Statistics will experience growth this year. That is according to IC Insights, the market research firm, which said it expects 11 segments to grow at a rate better than seven percent in 2012.

 

Six categories are, in fact, set to see double-digit growth this year, IC Insights said. This contrasts strongly with the firm's 2011 report, which forecast 18 products experiencing decline.

 

The NAND Flash memory market will lead the growth list, according to the research firm, which added that it expects NOR Flash to languish at the bottom. Ever-increasing demand for smartphones and tablets has driven NAND Flash demand over the last few years, and looking ahead to 2012, the firm expects the technology to benefit from growing demand for solid-state drives.

 

For the first time ever, in fact, IC Insights expects the total Flash market to surpass the DRAM market in 2012. This, the firm said, is largely due to high demand for NAND Flash, allied to ongoing weakness in DRAM average selling prices.

 

This year will see Wireless Telecom special-purpose logic/MPR devices and 32-bit MCUs lead all product segments, IC Insights said, with expected growth of 15 percent. The firm also expects the 16-bit MCU market to surpass the 8-bit MCU market for the first time.

 

In terms of integrated circuits (IC), meanwhile, the research firm expects the rise of automotive-related ICs to continue this year. The use of semiconductor technology in vehicles has increased in recent times thanks to safety and environmental issues.

 

NOR Flash, SRAM, EEPROM/other, DSP, gate array, and DRAM, by contrast, are all expected to see their second consecutive year of slower sales in 2012, IC Insights said.

Charles Gervasi

The Automation Paradox

Posted by Charles Gervasi Feb 26, 2012

350px-Mercury_Friendship7_Bassett_Celestia.jpg

Last week was the 50th anniversary of John Glenn’s flight, in which he became the first American to orbit the earth. The issues the flight had with automated controls make me think of how automation-related issues affect the world today.

 

The automated attitude control system used a control loop that fired thrusters to correct the orientation of the capsule. The system used small thrusters for minor adjustments and large thrusters intended primarily for changing the capsule's orbit to return to earth. Glenn found the control loop at times used the large thrusters for orientation adjustments that could be done with the smaller jets, wasting fuel that would be needed to return to earth. (See Chapter 6 of this NASA document for further reading.)
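The thruster-selection problem can be sketched as a bang-bang controller with two deadbands: a naive single-threshold controller would fire the big thrusters on corrections the small jets could handle, while a second, wider band reserves them for large errors. All thresholds and thrust values here are illustrative, not Mercury capsule specifications:

```python
# Two-deadband thruster selection sketch. Values are illustrative only.
SMALL_THRUST = 0.1   # arbitrary thrust units
LARGE_THRUST = 1.0
SMALL_BAND = 1.0     # degrees of attitude error tolerated before firing
LARGE_BAND = 5.0     # error beyond which the large thrusters engage

def select_thrust(attitude_error_deg):
    """Return a signed thrust command opposing the attitude error."""
    magnitude = abs(attitude_error_deg)
    if magnitude < SMALL_BAND:
        return 0.0                       # inside deadband: coast
    thrust = LARGE_THRUST if magnitude > LARGE_BAND else SMALL_THRUST
    return -thrust if attitude_error_deg > 0 else thrust

print(select_thrust(0.5), select_thrust(2.0), select_thrust(8.0))
# -> 0.0 -0.1 -1.0
```

Collapse the two bands into one and every correction beyond it burns heavy fuel, which is essentially the failure Glenn observed.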

 

This situation reminds me of driving on a hilly highway using cruise control and automatic transmission. When the road exceeds a certain grade, the cruise control opens the throttle which causes the transmission to downshift out of overdrive to get more power.  If I find this is causing needless shifts on small hills, I may disengage the cruise control and wish I could override the automatic transmission too.

 

Glenn took this approach and turned off the autopilot system. He experimented with controlling the thrusters completely manually and with a semi-automated fly-by-wire mode, which let him control the orientation manually while the computer worked out how long to fire each thruster. This semi-automated mode turned out to be the best choice for attitude control. He used the fully automated mode for re-entry but stayed ready to switch the autopilot off if necessary.

 

The paradox of automation is that once an automated system is perfected, it often performs worse than a less reliable system, because human operators become dependent on the automation. This appears to be what happened in the Air France Flight 447 crash in 2009. A loss of airspeed indication caused the plane to switch from a fully automated mode to a semi-automated one. The fully automated mode would not have allowed the pilots to put the plane into an aerodynamic stall. This may be why the pilots ignored an audible stall warning and did not even discuss the possibility that the plane was stalling.

 

It is especially hard to improve safety in commercial aviation because it is already so safe. Maybe as automation improves further, we will simply accept humans becoming less skilled, because most of the time computers are more reliable. On the other hand, systems might be made more reliable through psychological techniques in which the automated system requests input from its human operators to keep them sharp and engaged in the details of the system.

20120219_102004.jpg

Stencil application of the DIY conductive ink (Via Jordan Bunker)

 

Building on the work of University of Illinois Urbana-Champaign professor Jennifer Lewis, Chicago-based "hacker" Jordan Bunker has successfully produced conductive ink using items bought on eBay. Bunker's effort thoroughly validates the desirability of Lewis's ink project and is sure to usher in a slew of copycats (imitation is the sincerest form of flattery). Easily printing circuit boards at home is right around the corner.

 

Bunker, a member of Chicago's Pumping Station: One, released these words:


Conductive inks have a myriad of different interesting applications. As a quick, additive construction method for electronic circuits, they are especially intriguing. Unfortunately, for a long time they have been just out of reach of the hobby market. They are too expensive to buy in decent quantities, too complicated to make, too resistive to be practical, or require high annealing temperatures (which would ruin many of the materials you’d want to put traces on).

 

Those ambitious few who want to create conductive ink with the same process can find the entire tutorial at Bunker's website: http://jordanbunker.com/archives/41
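A quick sketch of why resistivity matters for printed traces: trace resistance follows R = ρL/(W·t), so an ink even a few hundred times more resistive than copper turns short traces into whole ohms. The ink resistivity below is an assumed value for illustration, not a measurement of Bunker's recipe:

```python
# Resistance of a rectangular trace: R = rho * L / (W * t).
def trace_resistance_ohm(resistivity_ohm_m, length_m, width_m, thickness_m):
    return resistivity_ohm_m * length_m / (width_m * thickness_m)

COPPER = 1.68e-8      # ohm-meters, standard value for bulk copper
HOBBY_INK = 1.0e-5    # assumed: roughly 600x worse than copper

# Compare a 10 cm long, 1 mm wide, 50 um thick trace in each material.
for name, rho in [("copper", COPPER), ("ink", HOBBY_INK)]:
    r = trace_resistance_ohm(rho, 0.10, 1e-3, 50e-6)
    print(f"{name}: {r:.3f} ohm")
```

Tens of milliohms for copper versus tens of ohms for a poor ink is the difference between a usable trace and a resistor, which is why lowering annealed resistivity is the whole game.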

 

With such an easy process, I would not be surprised if thousands of people are attempting to create the same solution right now. Although Jennifer Lewis holds the rights to the ink, how long until someone else tries to sell it in some fashion?

 

Cabe

http://twitter.com/Cabe_e14

Corning-Glass-420x276.jpgcs jf.jpg

Lotus Glass promotional images (Via Samsung & Corning)

 

Corning Incorporated, maker of Gorilla Glass, and Samsung Mobile Display Co. are partnering to create a new glass substrate for future lines of high-end smartphones. The venture combines Corning's Lotus™ Glass substrate technology with Samsung Mobile Display's organic light emitting diode (OLED) displays. The market for OLED devices is growing quickly, and the two companies took notice.

 

Corning developed the glass with high-performance displays in mind. According to the company, Lotus Glass offers high "thermal and dimensional stability." Although Corning is pushing Lotus as the next great material, its development grew out of the need to withstand the extreme temperature shifts involved in manufacturing high-resolution OLED displays. The company also boasts faster response times using the Lotus glass substrate.

 

Samsung Mobile Display president and CEO Soo In Cho offered words of faith in the new union, "Samsung Mobile Display has led the global display industry by constantly seeking innovations and challenging current technologies' limits. We are confident that combining our business powers with Corning's technology leadership will deliver greater value to our clients."

 

Samsung sits at the top of the pyramid when it comes to deciding which markets are hottest. In January 2012, Samsung and Corning announced that Galaxy mobile devices (e.g. 720p phones) and Super OLED TV technology will be the first to use Lotus Glass. Will low prices come along with the new venture?

 

Cabe

http://twitter.com/Cabe_e14

 

Much of the latest engineering innovation comes from electronic gadgets like cell phones and media players, but plenty of purely mechanical ideas can revolutionize the world just as the smartphone has. The latest comes in the category of automotive engines, shaking up nearly a century of practice in the field.

 

 

Researchers at Michigan State University have designed and built a prototype rotary engine that may soon be the most logical replacement for the conventional engine in hybrid cars. Norbert Mueller and his team's work on the wave disk generator has been recognized by the Energy Department's Advanced Research Projects Agency, which has awarded the project $2.5 million in funding since 2009.

 

6a00d8341bf67c53ef014e874784bb970d-800wi.jpg

Wave disk generator (via Norbert Mueller)

 

The "wave disk generator" (WDG) works by spinning a rotor built with wave-shaped channels. As the rotor spins, air and fuel fill the channels; when the inlets become blocked off, pressure builds, creating a shock wave within the chamber. The compressed air-fuel mixture ignites, transferring its energy to the rotor, and the exhaust fumes are released as the rotor spins past the exhaust port openings. It works very much like a turbine.

 

 

The new engine is poised to shake up the industry because of its efficiency. The WDG uses 60 percent of its fuel for propulsion, four times the paltry 15 percent at which most internal combustion engines operate. A side benefit helps out the planet: the team stated that the engine can reduce emissions by 90 percent compared to standard autos. Additionally, it could make hybrid vehicles up to 30 percent lighter (about 1,000 pounds), thanks to its compact, light design and its ability to work without many standard combustion engine parts. It is also easy to manufacture; the team stated that the WDG could reduce the cost of hybrid vehicles by up to 30 percent.
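A quick sanity check on those figures. All inputs below are the article's numbers; the implied baseline vehicle weight is my own back-of-the-envelope inference:

```python
# Sanity check on the claimed wave disk generator (WDG) figures.
wdg_fuel_efficiency = 0.60   # fraction of fuel energy used for propulsion
ice_fuel_efficiency = 0.15   # typical internal combustion engine

improvement = wdg_fuel_efficiency / ice_fuel_efficiency
print(f"Efficiency improvement: {improvement:.0f}x")   # matches the "four times" claim

# "Up to 30 percent lighter (about 1,000 pounds)" implies a baseline hybrid
# weighing roughly 1,000 / 0.30 lb, a plausible curb weight.
baseline_weight_lb = 1000 / 0.30
print(f"Implied baseline vehicle weight: {baseline_weight_lb:.0f} lb")
```

The two claims are at least internally consistent: a 3,300 lb baseline is in line with compact hybrid curb weights.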

 

 

The new engine has enormous potential to improve the efficiency of new hybrid vehicles. Furthermore, if many companies install this engine in their vehicles, it could substantially reduce demand for fossil fuels. Though I am sure the fuel industry will adjust prices to keep pace. Mueller and his team hope to have a 25-kilowatt version of the prototype finished and inside a hybrid test vehicle by the end of the year.

 

Cabe

http://twitter.com/Cabe_e14

FBOMB.jpgCOmonitorspy.jpg

(Left) F-Bomb in 3D printed enclosure (Right) F-Bomb hardware hidden in a CO monitor

 

 

Of all the mini computers available, this one might have the biggest potential of all. It is appropriately called the F-Bomb, which stands for Falling or Ballistically-launched Object that Makes Backdoors. Brendan O'Connor, a security researcher at Malice Afterthought, has developed the system from commercially available components; assembled, it costs less than $50 and, with the right software, is capable of breaking into a network.

 

 

The purpose of the device is exactly as its name dictates. When dropped, the F-Bomb searches for networks within range and infiltrates all the networks it finds. The computer measures 3.5 in by 4 in by 1 in. It is made from a PogoPlug, 8 GB of flash memory, small antennas, and a case made on a 3D printer. The components of the F-Bomb are so small that they can easily be fitted inside an ordinary device, like a smoke detector, and operate covertly for as long as the batteries last.

 

 

The F-Bomb will be inexpensive, around $50 USD, on purpose. O'Connor explained why: "If some target is surrounded by bad men with guns, you don't want to have to retrieve this, but you also don't want to have to pay four or five hundred dollars for every use. The idea is that it's as close to free as possible. So you can throw a bunch of these sensors at a target and get away with losing a couple nodes in the process."

 

 

In essence, the F-Bomb is a data-collecting device. O'Connor designed it with Wi-Fi cracking software to create back doors into networks and collect private information. However, any applications and programs compatible with its Linux OS can run on the F-Bomb. With the correct sensors, it can perform other valuable data collection, like gathering meteorological or atmospheric information.

 

 

O'Connor previously won DARPA funding under the Cyber Fast Track program for a project called "Reticle: Leaderless Command and Control." The F-Bomb was a follow-up project, funded independently. The purpose of Reticle is on the hush-hush.

 

 

Cabe

http://twitter.com/Cabe_e14

Massachusetts Institute of Technology (MIT) has made an exciting announcement for the technology industry in the US and elsewhere by confirming that it is set to launch its first free course, one that can be studied and assessed exclusively online.

 

The course begins in March 2012 and, MIT explained, will be the first prototype of an online learning initiative known as MITx. It will offer a fully automated learning experience, according to the world-leading university, which has claimed that it intends to "shatter barriers to education".

 

Although consumer electronic products and the internet are both playing an increasingly prominent role in our lives, this influence has yet to really extend itself into the sphere of education. There are, of course, online degree courses already available at MIT. But through the ambitious, innovative new approach, the university hopes to completely remove geographical boundaries, meaning students will be able to take the course from anywhere in the world.

 

A university spokesman said that the course 6.002x: Circuits and Electronics, which is inspired by the campus-based course of the same name, is not merely a "watered-down" approach to the more conventional subject. Indeed, he insisted that the new course is just as intense and intellectually testing.

 

According to Anant Agarwal, director of MIT's Computer Science and Artificial Intelligence Laboratory, the course has been designed in a way that will "keep it engaging". "There are interactive exercises to see if they've understood," the Professor explained.

 

MIT's provost, Rafael Reif, explained that the course is being seen as a litmus test for the concept of harmonizing education and technology, and seeing just how far the boundaries could be pushed. Some material, the Professor conceded, is best taught face-to-face. "It's quite possible that employers will want to find out about the courses we offer," he added.

 

Apple, meanwhile, recently announced that it hopes to play a more extensive role in the education sector in the US. Phil Schiller, Apple's Senior Vice-President of World-Wide Marketing, announced plans to sell digital textbooks at an event staged in New York and suggested that such technologies could help to revolutionize classroom teaching. It remains to be seen, though, just how much of an influence technological development will have on the education sector.

L01.jpg

Concept drawing (via Joao Paulo Lammoglia)

 

People have come up with some pretty ingenious ways to recharge our portable devices. Some of the more interesting include the PowerTrekk's "hydrogen gas pucks" and the nPower PEG, which harnesses kinetic energy for the same purpose. However, this innovative concept, designed by Joao Paulo Lammoglia, might be the most practical yet: it takes advantage of breathing.

 

Called ‘AIRE’, the mask uses small wind turbines to convert breathing into electrical energy, which is then transferred directly by cable to devices such as smartphones or tablets. Unlike solar chargers (which can't recharge at night) or wind chargers, the AIRE mask has a potentially inexhaustible resource to draw from, regardless of weather or sunlight conditions. According to Joao, the mask can be used while walking, jogging, sleeping, or just about any other activity the wearer chooses (SCUBA diving excluded), which makes the charger more versatile than the other designs.

 

Although this might not be the most fashionable way to charge devices, in disaster areas good looks are not in question. I can see this having a dual purpose: protecting against whatever may be in the air in a hazardous area while keeping one connected to communication networks or powering life-saving equipment. Yet another "why did I not come up with this before" device.

 

More information on AIRE (which recently won the Red Dot design award) and other projects from Joao can be found here: http://www.joaolammoglia.net/

 

Cabe

http://twitter.com/Cabe_e14

500_0_2909407_59258.jpg

Luminarie De Cagna (via Ghent)

 

This year’s ‘Licht Festival’ (festival of lights), held in Ghent, Belgium, featured some pretty impressive works of art made with various light sources. While all the works presented this year are remarkable, one stands out from the others: Luminarie De Cagna (translate at your own risk) created a "cathedral of light." The 90-foot-tall display holds over 55,000 LEDs yet draws only about 20 kW of power! The company will be presenting different artworks later this year at the Jerusalem Light Festival as well as the Falla Sueca Literato Azorin held in Valencia, Spain.

 

Luminarie De Cagna is an Italian family business. During festivals in the 1930s, the company illuminated buildings using oil and carbide lamps. It then moved to incandescent bulbs, and has used LEDs since 2006. The impressive cathedral of light is a testament to nearly 100 years of light-art mastery.

 

Another example of Luminarie de Cagna's work. Street video of the Kobe Luminarie 2011 festival

 

Cabe

http://twitter.com/Cabe_e14

Tongue-Drive-Commands_hires.jpgtongue-drive-holder_hires2.jpg

(Left) Tongue Drive System retainer. (Right) iPhone rig for wireless control. (via Georgia Tech & Maysam Ghovanloo)

 

A mouth-based interface allows users to drive an electric wheelchair with only their tongue. Dubbed the "Tongue Drive System" (TDS), it was developed by researchers at the Georgia Institute of Technology for those with "high-level spinal cord injuries" or anyone unable to move their limbs. From controlling a wheelchair to a cursor on a screen, the TDS provides a welcome interface for those in need.

 

Tongue-Drive-Retainer_hires.jpg

Retainer circuit (via Georgia Tech & Maysam Ghovanloo)

 

The device is housed entirely in a retainer and is unnoticeable from the outside. The TDS works by detecting a magnet, attached to the user's tongue via a tongue piercing, with four magnetic field sensors at the corners of the retainer. The sensors detect the magnet's position and wirelessly send the information to an iOS device (an iPod or iPhone in the demonstrations), where it is interpreted as a cursor action or wheelchair movement. The retainer also houses a lithium-ion battery and an induction coil for charging. After the retainer is molded for the particular user, it is encased in a vacuum-sealed dental-acrylic coating to protect it from moisture. Additionally, a "sip-n-puff" straw sensor can work in tandem with the TDS, providing an extra switch.
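To illustrate the idea, here is a hypothetical sketch of how readings from four field sensors might be mapped to commands. This is not the Georgia Tech implementation; the sensor layout, thresholds, and command set are invented for illustration:

```python
# Illustrative sketch (not the actual TDS firmware): map four magnetic-field
# magnitudes to a command. The magnet on the tongue sits closest to one or
# two sensors, so the strongest paired reading above a noise threshold wins.

def classify_command(front_left, front_right, rear_left, rear_right,
                     threshold=50.0):
    """Map four field magnitudes (arbitrary units) to a command string."""
    readings = {
        "forward":  front_left + front_right,   # tongue pushed forward
        "left":     front_left + rear_left,     # tongue toward left cheek
        "right":    front_right + rear_right,   # tongue toward right cheek
        "backward": rear_left + rear_right,     # tongue pulled back
    }
    best = max(readings, key=readings.get)
    # Below the noise threshold, treat the tongue as at rest.
    return best if readings[best] > threshold else "neutral"

print(classify_command(80, 75, 10, 12))  # strong front readings -> "forward"
print(classify_command(5, 6, 4, 5))      # weak everywhere -> "neutral"
```

A real implementation would work from calibrated 3-axis field vectors and a trained classifier per user, but the principle is the same: tongue position becomes a discrete control signal.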

 

Prior versions of the system used a headset that needed constant calibration. "Because the dental appliance is worn inside the mouth and molded from dental impressions to fit tightly around an individual’s teeth with clasps, it is protected from these types of disturbances," said Maysam Ghovanloo, associate professor and project lead at Georgia Tech.

 

Trials are being conducted at the Shepherd Center in Atlanta and the Rehabilitation Institute of Chicago, where 11 volunteers with high-level spinal cord injuries have received clinical tongue piercings for the magnet. Over two test sessions per week for six weeks, the users are showing rapid improvement in controlling the system. The Georgia Tech team believes that patients will become even more proficient with the TDS over time. This may become a life-changing technology; I hope the trials lead to more funding for the project.

 

Ghovanloo showed the TDS at the IEEE International Solid-State Circuits Conference on February 20, 2012. The project is funded by the National Institutes of Health, the National Science Foundation, and the Christopher and Dana Reeve Foundation.

 

Cabe

http://twitter.com/Cabe_e14

BAE Systems, the global firm that specialises in the development of advanced defence, security and aerospace systems, has revealed that it is using torches, drones and an electric Le Mans racing car as test-beds for a new kind of "structural battery" made from carbon fibre.

 

The firm, currently testing the technology in the Lola-Drayson B12/69EV, which it hopes will become the world's fastest electric car, explained that it can save weight by building the battery into an object. Long-term, BAE hopes that the power source material will eventually be as easy to work with as existing carbon fibre.

 

Stewart Penney of BAE Systems explained to the BBC that the cutting-edge technology is much more than merely a traditional battery in a different shaped case.

 

"There are number of people that will build a battery shaped like a beam, for example, but fundamentally that is just an odd-shaped battery, it isn't a structural battery," he said. "The beauty of what we've got is that, when it's fully developed, a company will be able to go out and buy what is a standard carbon-composite material, lay out the shape, put it through the curing process and have a structural battery."

 

BAE was able to achieve this, according to Mr Penney, by merging battery chemistries into composite materials. "You take the nickel base chemistries and there are ways you can integrate that into the carbon fibre," he added.

 

The firm, which started work on the technology when it was seeking to help lighten the load on British troops carrying electronic objects, hopes to help make structural batteries relatively inexpensive in the long-term.

 

Given that the nickel-based batteries were initially intended for use in the military, they have been specifically designed to be resistant to fire and have a long working life. And going forward, BAE said that it hopes to develop lithium-based batteries capable of storing more power.

bike-sharing-experiment-launched-in-san-francisco.jpg

e-bike fleet (via austinevan)

 

In the city of San Francisco, going for a bike ride is exceptionally picturesque, but the steep hills may deter non-enthusiasts from using bikes as a viable means of transportation. One solution that lessens the physical stress, and could win some people over to biking, is making electric bikes available for rent. The car-sharing program City CarShare, in San Francisco, plans to make 45 e-bikes available for daily rental by the end of this year and 45 more in 2013. The program is receiving funds from the Federal Highway Administration's Value Pricing Pilot Program, which aims to lower traffic, pollution, and dependence on fossil fuels through variable pricing, meaning prices that adjust with demand.

 

 

The San Francisco Metropolitan Transportation Agency received $1.5 million, $760,000 of which will go to the City CarShare program to fund 40 percent of the cost of 90 e-bikes spread across 25 locations for three years.
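As a quick sanity check on those numbers, assuming the $760,000 really does cover 40 percent of the 90-bike program's total cost (which presumably includes stations and operations, not just the bikes):

```python
# Back-of-the-envelope check on the grant figures.
grant_to_carshare = 760_000   # dollars, stated to be 40% of program cost
share_covered = 0.40

total_program_cost = grant_to_carshare / share_covered
cost_per_bike = total_program_cost / 90   # 90 e-bikes over three years

print(f"Implied total program cost: ${total_program_cost:,.0f}")
print(f"Per e-bike slot over three years: ${cost_per_bike:,.0f}")
```

Roughly $21,000 per bike slot over three years sounds steep for a bicycle, which underlines that the stations, maintenance, and research components dominate the cost.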

 

 

Renting a bike will cost 50 to 70 percent less than renting a car; currently, customers pay as little as $5 per hour plus a $10 monthly fee for cars. City CarShare wants to make convenience its first priority, before profit. Since bikes must be returned to their original stations, City CarShare says it will keep overnight fees low to lessen the financial burden and increase convenience.

 

 

The other part of the funding will be devoted to answering when and why individuals choose to rent e-bikes instead of cars. The research, conducted by UC Berkeley, will attempt to analyze the long-term impact and performance of the project. Bike rentals are not unheard of, but City CarShare is venturing into new territory by offering both e-bike and car rentals. The outcome of this trial is sure to influence other cities and companies to take on similar programs.

 

 

I would like to see this e-bike rental program spread. There have been so many times when an e-bike would have been useful for commuting in my city. I just do not want to buy and house a $300+ bike at the moment. Hence why I want to make one.

 

 

Cabe

http://twitter.com/Cabe_e14

Video via NASA

 

On February 24, 2011, a dexterous humanoid robot, Robonaut 2, was launched into space: the first of its kind and the first United States robot to reach the International Space Station. NASA and General Motors have been working together to create a more human-like robot that can mimic and execute human motions and actions. Success would allow the robot to perform current human tasks, particularly dangerous or repetitive ones, while using the same tools and technologies humans use.

 

 

Currently, tests are being performed to calibrate the robot before it is put to use carrying out missions. Calibration involves comparing the motions of the hardware in the 1 g environment on Earth against its behavior in the 0 g environment of space. The first tests included booting the robot up and making sure all the circuitry and software survived the trip to the space station in working condition. Just recently, the team began testing the movement of the joints and hands. Ultimately, the robot is governed by a set of software parameters to keep it safe, but its actions and movements can be controlled from Earth or from the space station itself.

 

 

Robonaut 2 was built with a wide range of advantages over the former Robonaut 1. It is capable of carrying out tasks four times faster than the original. Its systems include built-in infrared sensors, a high-resolution camera, and an advanced sensing system. Its movement technologies include extensive neck travel, ultra-high-speed joint controllers, enhanced finger and thumb movement, and series elastic joint technology.

 

 

Robonaut 2 has an identical twin on the ground that will be used to simulate missions and tasks. It is placed in a replica environment with panels and systems that mirror those of the space station itself. After a simulation is complete, the program can then be sent to the orbiting Robonaut for execution. Future plans include legs for navigating around the space station, or possibly wheels and a rover base so it can travel across terrain on another planet.

 

 

See more robots in the element14 Robotics Group.

 

 

Cabe

http://twitter.com/Cabe_e14

sprinting.JPG

From the paper "Computational Sprinting," showing temperature vs. processing power.

 

 

Computers and mobile devices run their CPUs at a constant speed to process all of our software. That generates heat, which has to be removed by a heatsink or some other cooling mechanism. Making the CPU run faster usually requires overclocking, which generates even more heat. What if you could get incredible speeds without the need for bulky cooling systems, even in mobile devices?

 

That is the idea a combined team of researchers from the University of Pennsylvania and the University of Michigan is exploring with what they call ‘Computational Sprinting’. The idea is to have a many-core (over 12 cores) CPU run in incredibly fast bursts instead of at a sustained speed, giving the user super-speed for the task or app needed at that particular instant. For example, a smartphone would use one core for typical tasks while leaving the additional cores dormant until needed for more complex applications. The team demonstrated the concept in a simulated environment running a 16-core chip and found that computational sprinting increased performance by a factor of 10!
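The arithmetic behind sprinting is straightforward. This toy model (the task size is illustrative, not a figure from the paper) shows the ideal-case speedup of bursting onto all cores versus plodding along on one:

```python
# Toy model of computational sprinting: a task needing a fixed amount of
# work, on a device whose sustained thermal budget allows only one core.
cores_sprint = 16        # all cores active during a sprint
cores_sustained = 1      # thermally sustainable core count
work = 8.0               # core-seconds of computation required (assumed)

t_sustained = work / cores_sustained   # time on one core
t_sprint = work / cores_sprint         # time with a 16-core burst

print(f"Sustained: {t_sustained} s, sprint: {t_sprint} s "
      f"({t_sustained / t_sprint:.0f}x faster)")
```

The ideal 16x exceeds the 10x the team measured, presumably because real workloads do not parallelize perfectly and bursts carry startup overhead.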

 

Again, the by-product of increased speed is heat: a CPU killer at worst, a reducer of the chip's life-span at best. Exploiting thermal capacitance is the team's approach. The amount of capacitance dictates how much heat a "sprint" can produce. Storing heat in the device's case or other passive components is one option. The team looked into placing a small bit of metal near the chip, but its heat-storage capacity is low. They are also looking into phase-change materials (PCMs), in which heat is stored as the material transitions from a solid to a liquid. Between sprints, the PCM returns to its original state. Currently, PCMs are the only way their prototype can handle the large amounts of heat sprinting produces.
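To get a feel for the numbers, here is a rough estimate of how long a sprint a small slug of paraffin PCM could absorb. The ~200 kJ/kg latent heat of fusion is a textbook figure for paraffin; the 2 g mass and the 15 W excess sprint power (a 16 W sprint minus a 1 W sustained budget) are my own assumptions for illustration:

```python
# How much sprint heat a small phase-change material slug might absorb.
latent_heat_j_per_kg = 200_000   # paraffin latent heat of fusion, approx.
pcm_mass_kg = 0.002              # 2 grams packaged near the die (assumed)
sprint_excess_power_w = 15.0     # 16 W sprint minus 1 W sustained (assumed)

energy_capacity_j = latent_heat_j_per_kg * pcm_mass_kg
sprint_duration_s = energy_capacity_j / sprint_excess_power_w

print(f"{energy_capacity_j:.0f} J of buffering -> "
      f"~{sprint_duration_s:.0f} s of sprint before the PCM is fully melted")
```

A few grams of PCM buying tens of seconds of full-tilt computation is exactly why the team considers PCMs the only practical heat buffer for their prototype.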

 

Their experiments showed that, using parallel computation in short bursts, a 1 W mobile device can deliver the responsiveness of a 16 W chip. If the right PCM can be found, we could all have sprint-capable mobile devices (hopefully not melting) in our hands in the near future.

 

Cabe

http://twitter.com/Cabe_e14

 

Moore's Law states that the number of transistors that can be placed inexpensively on an integrated circuit doubles roughly every two years; put another way, transistors shrink to about half their original size. In reality, the doubling has happened closer to every 18 months. Based on current predictions, the law will hold true until somewhere between 2015 and 2020, at which point a single transistor would be the size of one atom.
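The projection is easy to reproduce. In this sketch the 18-month halving cadence is the article's figure, while the ~22 nm starting node (leading-edge in 2012) and the ~0.2 nm atomic endpoint are rough assumptions of mine:

```python
import math

def years_until(start_nm, end_nm, halving_period_years=1.5):
    """Years for a feature size to shrink from start_nm to end_nm,
    halving every halving_period_years."""
    halvings = math.log2(start_nm / end_nm)
    return halvings * halving_period_years

# Illustrative only: actual scaling cadence and endpoint assumptions vary.
print(f"~{years_until(22, 0.2):.1f} years at an 18-month halving cadence")
```

Depending on whether "halving" is applied to linear dimensions or to area, and on the starting node chosen, the endpoint shifts by years, which is why projected dates for hitting the atomic limit vary so widely.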

 

Can single-atom transistors exist? The answer is shocking: yes, they already do.

 

Single_atom_quantum.jpg

3D model, constructed with a scanning tunnelling microscope, of the single-atom phosphorus transistor (via UNSW)

 

Researchers at the University of New South Wales (UNSW), Australia, have precisely placed a single phosphorus atom between atomic-scale electrodes and control gates. UNSW Professor Michelle Simmons, leader of the project at the ARC Centre for Quantum Computation and Communication Technology, explained, "...this device is perfect... This is the first time anyone has shown control of a single atom in a substrate with this level of precise accuracy. Our group has proved that it is really possible to position one phosphorus atom in a silicon environment - exactly as we need it - with near-atomic precision, and at the same time register gates."

 

Inside a high-vacuum chamber, the team used a scanning tunnelling microscope (STM) to see and manipulate the atom on the crystalline substrate. A lithographic process patterned the phosphorus atom into a usable transistor: a non-reactive layer of hydrogen was applied to the surface, the STM then removed selected hydrogen atoms, etching a pattern, and a chemical reaction placed the phosphorus atom in the center. Everything was then encapsulated in silicon, with connections through the silicon allowing control of the individual atom. The measured results agreed with theory for what a single-phosphorus-atom transistor should do.

 

Although the team stated that they beat Moore's Law, they now have to manufacture inexpensive devices using the technology to solidify an actual law-break. They have only three years to do it, and I hope they do. Keep in mind, controlling individual atoms is at the core of quantum computing, and this might just bring about the technological singularity much faster. (When innovation can happen in an instant, every instant.)

 

Cabe

http://twitter.com/Cabe_e14

 

See Engineering On Friday's take on this development.

Rosepoint3_p.jpg

Rosepoint chip (via Intel)

 

‘Fused’ chips are fast becoming the status quo in powering today’s mobile devices, particularly tablets and smartphones. For those of you who don’t know, fused chips combine CPUs and GPUs on a single chip (or die), such as AMD’s Fusion. Intel recently stepped up its game in this field with the introduction of its Sandy Bridge line of fused chips, but it has not stopped the integration there.

 

 

The company recently stated that it has combined Wi-Fi with its line of Atom processors, code-named Rosepoint, which will be unveiled at this year’s International Solid-State Circuits Conference in San Francisco. Not much is known about Rosepoint beyond a few ‘leaked’ images and a vague Intel press release. Details say it features a 32 nm SoC with a built-in Wi-Fi transceiver (reportedly running at 2.4 GHz) and two Atom CPUs, all crammed onto the same die. Another goal is to reduce chip count. Although a wireless transmitter that close to other digital signals would normally cause interference, Intel has found some "hush-hush" way to shield the CPU from the onboard Wi-Fi. Integrating wireless onto CPU cores means lower power usage as well as lower costs. If all goes well, the technology could be found in mobile devices as early as 2013.

 

 

More information will be released at this year’s ISSCC, so check back for an update! (ISSCC runs February 19-23.)

 

Cabe

http://twitter.com/Cabe_e14

flippingalig.jpg

Quantum Dot concept image (via the Optical Society journal article "Remote switching of cellular activity and cell signaling using light in conjunction with quantum dots")

 

Lih Lin and her research team at the University of Washington have been working on quantum-dot-based stimulation of cells within the brain, with surprising results. Quantum dots (QDs) are small crystalline particles only a few nanometers wide that behave like tiny semiconductors. They are readily excited by light: when exposed to a light source, the QDs become negatively charged. Their small size and composition give them extraordinary fluorescent optical properties, which are easily tuned by changing the size or physical composition.

 

 

Lih Lin explained where the QDs are used, "Many brain disorders are caused by imbalanced neural activity... Manipulation of specific neurons could permit the restoration of normal activity levels."

 

 

The team succeeded in creating action potentials within neurons by exciting quantum dots nearby. Stimulating a QD created a negative charge around it, opening the ion channels in the neurons. The ion channels are vital to stimulating brain cells: they allow positive charges to flow into the cell and create an action potential, which is what sends messages to other neurons and nerve cells in the body, allowing communication to occur. The goal is to use quantum dots to control the abnormal signal firing within the brain caused by disorders such as Parkinson's.

 

 

QDs could be used to treat a wide variety of brain disorders, from dementia to depression. Furthermore, they may be able to treat problems within the eye, and possibly blindness. The main drawback right now is finding a way to shine light on the quantum dots while they are in the brain.

 

 

The first use of QDs will likely happen in the eye, where light is constantly absorbed. However, QDs could also be delivered to the brain through the veins, where they could help balance neural activity. Quantum dots have a bright future in the medical field, treating disorders possibly without the dangerous or unwanted side effects that come with current brain-disorder treatments.

 

 

Cabe

http://twitter.com/Cabe_e14

 

People sure love their pets. Some even go out of their way to comfort them when not at home, like hiring a sitter or booking a doggy day-spa. Others, like Microsoft engineer Jordan Correa, build interactive robots to keep tabs on them while away. Called Darwinbot (named after his dog Darwin), the first version was built around the iRobot Create, a cleaning-robot platform at heart. For version 2, Correa moved to the Parallax Eddie platform, which uses the 8-core Propeller microcontroller, a Kinect sensor, and a handful of additional features the iRobot Create lacked.

 

As a Microsoft employee, it is not surprising that he used the MS Robotics Developer Studio for the dog-interaction software. The robot is equipped with a ‘ball-launcher’ that can hurl a ball about 15 feet, along with a Lynxmotion robotic arm used for ball retrieval. Included on the robot is a ‘hopper’ that dispenses treats on command (why Darwin doesn’t simply raid the dispenser is currently unknown). Housed on the front is an array of cameras, including a webcam that can pan and tilt, along with a Kinect used for obstacle detection and avoidance. Also included is a Slate PC running Skype, so Darwin can see and hear his master, who controls the robot via an Xbox 360 controller.

 

Promoting Microsoft like this is surely the only way Correa can get away with a telepresence play-with-his-dog session while on the clock. Will his dog love the robot more over time?

 

See more robots in the element14 Robotics Group.

 

Cabe

http://twitter.com/Cabe_e14

Soldier prepares UAV for Afghanistan flight.png

Soldier sets up a UAV communication system in Afghanistan

 

Big-bandwidth 4G wireless networks are popping up everywhere these days; it is getting hard to find an area without a fast connection. That is not the case for the men and women fighting on foreign battlefields like Afghanistan, where wireless networks are hard to come by even at 3G speeds. Wars are won with information; having soldiers connected is paramount.

 

Slow data speeds may soon be a problem of the past, though, as DARPA's (Defense Advanced Research Projects Agency) STO (Strategic Technology Office) looks to bring 4G-level connections to even the most remote battlefields. To do this, DARPA plans to use its Mobile Hotspots program, built on a millimeter-wave communications platform (wavelengths of 10 mm down to 1 mm). The system will be carried on air vehicles as well as ground vehicles and will give the war-fighter the speed of a typical fixed 4G tower, about a gigabit per second, without the infrastructure (it is kind of hard to hide a cell tower in a war zone). The Mobile Hotspots program will also utilize DARPA's 'Fixed Wireless at a Distance' program, essentially a high-performance cell tower placed in a protected area like a forward operating base (FOB). That work is also being looked at to boost UAV transmission power in hopes of extending range.
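For reference, the quoted 10 mm-to-1 mm wavelengths pin down the frequency band directly from the relation f = c / λ:

```python
# Convert the millimeter-wave wavelength range to frequencies.
c = 299_792_458.0          # speed of light, m/s

f_low  = c / 0.010         # 10 mm wavelength
f_high = c / 0.001         # 1 mm wavelength

print(f"{f_low / 1e9:.0f} GHz to {f_high / 1e9:.0f} GHz")
```

That 30-300 GHz span is the extremely-high-frequency (EHF) band, well above the low single-digit GHz bands used by commercial 4G, which is what makes the tight, steerable beams DARPA wants practical.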

 

DARPA Program manager Dick Ridgway explained how they will get up and running, "Mobile Hotspots will require the development of steerable antennas, efficient millimeter-wave power amplifiers, and dynamic networking to establish and maintain the mobile data backhaul network. We anticipate using commercial radio protocols, such as WiFi, WiMax or LTE [Long Term Evolution], as a cost-effective demonstration of the high-capacity backbone.  However, the millimeter-wave mobile backbone developed during this program will be compatible with other military radios and protocols.”


 

"The principle of strategy is having one thing, to know ten thousand things." - Miyamoto Musashi (Book of Five Rings)

 

Cabe

http://twitter.com/Cabe_e14

Google's ambitious bid to acquire Motorola Mobility, the international phone maker, has been rubber-stamped by authorities in the US. The announcement was made a matter of hours after Google received clearance from authorities in Europe.

 

Indeed, the European Commission determined that the proposed buyout would not raise competition issues in the market for operating systems for handheld devices. US regulators, meanwhile, agreed with this verdict, but pledged to continue to pay attention to Google and its use of patents.

 

Before the deal is concluded, however, regulators in China, Taiwan and Israel must give their backing to the proposed takeover, as these countries are linked to the manufacturing process.

 

Motorola split into two divisions last year, and Google is keen to acquire the business that specialises in making mobile phones and tablets. Through the deal, Google will immediately gain access to more than 17,000 of Motorola Mobility's patents, which will help to ring fence the company against lawsuits from rival firms.

 

In addition to the patents, Google is keen to gain ownership of Motorola's wireless accessories, set-top boxes and video distribution systems, as well as the firm's wireline broadband infrastructure products.

 

But according to Joaquin Almunia, the EU's Competition Commissioner, the deal would not significantly reduce competition in what is currently a very fierce marketplace. In a statement, he added: "The commission will continue to keep a close eye on the behaviour of all market players in the sector, particularly the increasingly strategic use of patents."

 

The support of the EU for the proposed takeover is an important milestone for Google, according to the firm's Vice-President Don Harrison, who explained that Motorola remains central to the firm's long-term ambitions.

 

"As we outlined in August, the combination of Google and Motorola Mobility will help supercharge Android," he commented. "It will also enhance competition and offer consumers faster innovation, greater choice and wonderful user experiences."

 

The EU commission explained that it backed the buyout after concluding that Google is unlikely to restrict the use of Android solely to Motorola, observing that the mobile phone firm is a "minor player in the European Economic Area".

 

Google, of course, is still embroiled in various patent disputes in court rooms around the world and this takeover situation is key to a number of those ongoing battles.

ViaSat-1_w_background_1-300x231.jpgViaSat introduced its Exede high-speed satellite Internet service in January. Like the proposed LightSquared system, it uses narrow-beam transmission to reuse bandwidth in cellular fashion. ViaSat's system, however, operates in the 20 GHz range, making it easier to realize higher-gain antennas with tight beams serving small regions of the earth.

 

Providing "fast" Internet service to rural regions is an important infrastructure issue that has been likened to providing electricity, phone service, and highways to rural regions in the last century. It raises the question of what "fast" means when it comes to data service. Fast can mean high throughput, a large number of bytes per second; or it can mean low latency, a short wait before data begins to arrive. For sending large files, transmission delay, i.e. the delay associated with throughput limitations, predominates. For small files, or webpages that load many small files, latency predominates. Latency on satellite links is high due to the propagation delay of radio waves, about 5 µs per mile.
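To put the satellite latency in concrete terms, here is a quick back-of-the-envelope calculation. The altitude and speed-of-light figures are standard physical constants; everything else follows from them:

```python
C = 299_792_458.0         # speed of light in vacuum, m/s
GEO_ALT_M = 35_786_000.0  # geostationary altitude above the equator, m

def geo_latency_s(hops: int = 1) -> float:
    """One 'hop' is ground -> satellite -> ground."""
    return hops * 2 * GEO_ALT_M / C

one_way = geo_latency_s(1)  # data travelling up and back down once
rtt = geo_latency_s(2)      # a request plus its response
# one_way is roughly 0.24 s, so a request/response pair costs ~0.48 s
# before any processing or queuing delay is added.
```

That half-second floor exists no matter how much throughput the link offers, which is exactly why the latency/throughput distinction matters here.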

 

Viasat reduces the delay by downloading all the files needed for a webpage and sending them all at once.  There is still the delay of requesting a webpage and downloading it, but the system eliminates the need for the browser to download each file needed for the webpage individually. 
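A toy model shows why this bundling pays off on a high-latency link. The round-trip time and file count below are illustrative assumptions, not ViaSat figures, and the model deliberately ignores transfer time and browser parallelism:

```python
RTT_S = 0.6    # assumed satellite request/response round trip, seconds
N_FILES = 40   # assumed number of small files on a typical page

def sequential_load_s(n_files: int, rtt_s: float) -> float:
    """Naive case: one request/response round trip per file."""
    return n_files * rtt_s

def bundled_load_s(rtt_s: float) -> float:
    """Gateway pre-fetches the page and ships everything in one response."""
    return 1 * rtt_s

# sequential: 40 round trips of pure latency; bundled: a single round trip
```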

 

SatellitePropgationDelay.jpgThis issue underscores the need for a new benchmark to characterize Internet speed.  Most consumers know whether their Internet “speed” is 2Mbps or 5Mbps, but they don’t know typical latency to large servers.

 

Drawing an analogy to airplane speed, throughput would be the airplane’s capacity and latency would be the airplane’s speed.  If someone asks how fast an airplane can move people, they want to know more than the number of seats it has. 

 

The promotional information on Viasat’s website barely touches on latency, even though that’s where the bulk of the value of their system is.  Maybe the marketing people have determined that throughput is the only figure of merit widely recognized and a benchmark of loading pages with many files would be confusing. 

 

As technology moves forward will people become more aware of latency?  Or will people find more throughput-intensive applications, keeping throughput the primary figure of merit for data service? 

Clip from the documentary film "The Camel Race" from 2011

 

Camel racing is as popular in the UAE (United Arab Emirates) as horse racing is in the West. However, there is a vast difference between the two when it comes to the riders (jockeys). Until 2005, children as young as four were used as riders in 'The Sport of Sheiks', until international human rights agencies learned that the children were suffering abuses ranging from broken bones to starvation (starved so they would be as light as possible).

 

800px-Robot_jockey_army.jpg

Gen-1 robot-jockeys (via Wiki)

 

As a replacement, the UAE has adopted robots, first designed in 2003 by Rashid Ali Ibrahim of the Qatar Scientific Club in conjunction with an obscure (robotics?) company called 'Stanley'. These robots were not the performance kings they hoped for, so the design was handed over to K-Team (a Swiss robotics company) for a revision. Accustomed to human riders, the camels were naturally afraid of the original robot-jockey design. To overcome that problem, K-Team outfitted the little mechanical jockeys with sunglasses, hats, and silk racing scarves along with a human-like face. If you think that is strange, they also used the traditional perfumes once worn by human jockeys!

 

 

These robots were eventually revised again, as the K-Team models were heavy and cost about $5,500 US. The latest versions include mechanical legs for balance along with robotic arms that pull on the camel's reins. Also included is a miniature pneumatic actuator that controls a riding crop, all operated remotely by the camel's owner, who rides in a vehicle alongside the track. Recent reports coming out of the UAE say that some robots were outfitted with shock devices, which are now illegal thanks to animal-rights activists concerned for the camels' well-being.

 

It is a whole new robotics market for those interested.

 

See more robots in the element14 Robotics Group.

 

Cabe

http://twitter.com/Cabe_e14

viasatsb2p pr.JPG

ViaSat promotional image of the SurfBeam 2 Pro Portable Terminal for newscasting (via ViaSat)

 

I could have used this at CES. ViaSat is set to bring mobile satellite broadband at 12 Mbps in the near future with its SurfBeam 2 Pro Portable system. The system can be set up anywhere within minutes. The whole dish fits inside a carrying case about the size of a suitcase and fastens together using only your hands to tighten the connections.

 

 

Once assembled, a GPS system assists you in locating your position and aligning the dish correctly. It is as easy as setting the correct elevation, given by GPS, and then panning left or right to find the strongest signal. Once set, you are equipped with a four-port Ethernet router bringing broadband Internet wherever you would like in North America. Additionally, an optional battery pack provides up to 4 hours of use to keep from draining your laptop battery.

 

viasatport.JPGviasatsurfb2.jpg

SurfBeam 2 Pro Portable system (via video demo)

 

Earlier versions of this technology provided 350-500 kbps of data transfer, whereas the SurfBeam provides up to 12 Mbps (megabits per second) down and 3 Mbps up. It works by transferring data through ViaSat-1, a satellite in geostationary orbit around the Earth designed for higher data transfer speeds.
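For a feel of what that jump means in practice, compare idealized transfer times at the quoted rates (decimal units assumed, 1 Mbps = 10^6 bit/s; the file size is an arbitrary example):

```python
def transfer_time_s(size_bytes: float, rate_bps: float) -> float:
    """Idealized transfer time, ignoring latency and protocol overhead."""
    return size_bytes * 8 / rate_bps

CLIP_BYTES = 100e6  # a hypothetical 100 MB video clip

legacy_s = transfer_time_s(CLIP_BYTES, 500e3)   # old ~500 kbps service
surfbeam_s = transfer_time_s(CLIP_BYTES, 12e6)  # SurfBeam 2 Pro downlink
# legacy_s is about 27 minutes; surfbeam_s is just over a minute
```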

 

 

The technology does not fail to impress with its handling of HD video and a Skype conversation. How the system overcomes the physical distance between the transceiver and the satellite to provide a lag-free experience was not covered. It will sell for around $20,000, which is above the price range of everyday people. The focus is on newscasters, the military, and other emergency organizations that may need to transmit data, news, and video from remote locations. At first look, the price may seem rather high. Compared to the cost of a satellite truck, which runs between $400,000 and several million dollars, it looks like a surefire win.

 

 

A similar, stationary SurfBeam 2 for residential customers is already available. However, that system is tied down with tiered data plans and a monthly data cap.

 

The following is a video demonstration:

 

 

Cabe

http://twitter.com/Cabe_e14

Engineering On Friday CPU GPU a Toast to Us by Cabe Atwell b.jpg

If you're reading this, chances are you're using a laptop or PC to do so. Since this article is on an engineering-oriented website, you most likely have some knowledge of the components housed in either one. I'm talking primarily about the CPU (Central Processing Unit) and the GPU (Graphics Processing Unit) that make up the computer's brain and muscle, respectively.

 

The CPU processes complex code and executes instructions based on whatever app or software is being used. However, it has a tough time executing code that involves intensive 3D images or graphics. That is where the GPU comes in: it offloads most of the image work from the host CPU, freeing the CPU to crunch 1s and 0s for other tasks. Companies like AMD and NVIDIA have even combined the two on a single die (or chip). While they work in tandem through software, they do not really communicate with each other. Sad, I know.

 

All is not lost, as engineers from North Carolina State University have found a way to overcome that problem and even give the hybrid processor a 20% increase in performance. Dr. Huiyang Zhou, an associate professor of electrical and computer engineering, and his team accomplished this by having the GPU portion of the chip handle the computations while the CPU 'fetches' the data the GPU needs from system memory. Both grab data from system memory at relatively the same speed. However, the GPU can crunch the numbers faster when it comes to graphics, but the CPU is quicker when it comes to what information the GPU will need to accomplish its task. That makes the whole process more efficient according to Dr. Zhou. In recent tests, the team found that 'fused' chips increased their performance by 21.4%, which is no small feat, as any overclocker will tell you. Some tasks even rocketed over 114% faster.

 

fusion.jpgamd-fusion-desktop-roadmap.jpg

(Left) AMD Fusion APU (Right) Partial Roadmap (via AMD)

 

The research was partly funded by AMD, and the experiment was simulated on a future Accelerated Processing Unit (APU) where there is a shared L3 cache. The technique may be publicly available rather soon.

 

Cabe

http://twitter.com/Cabe_e14

 

See more Engineering On Friday comics in the Engineering Life group.

predatorb_1.jpgAeryonscout.jpg

(Left) U.S. Military predator drone (Right) Civilian drone filming the riots in Poland over the ACTA signing

 

UAVs have been all over the news in recent months, with federal and law enforcement agencies recently acquiring military-grade surplus. For them, it is legal to fly those in most major cities. For civilians, it's a different story, as a California-based realty company found out when the FAA (Federal Aviation Administration) scolded it for using UAVs to make promo videos for potential customers. Even hobbyists in the model-airplane realm have had it rough: federal rules dictate they can only fly in designated areas and below a predetermined height.

 

However, this might change, as the United States Senate has introduced legislation that would require the FAA to revise its rules concerning private UAVs. Included are altitude revisions for drones that weigh up to 55 pounds, meant to avoid collisions with commercial aircraft. That situation happens more frequently than you might think: August 2011 saw a collision between a Shadow drone and a C-130 over Afghanistan.

 

Airline pilots are voicing their concern with the revision, as they have to log a certain number of flight hours while drone pilots do not. They feel that UAV pilots should have to meet the same standards as pilots of manned aircraft. There is also the concern of crashing into residential areas, as it is not uncommon for UAVs to suffer catastrophic malfunctions and plummet to earth. Rules and at least a few test sites are already in place, with full implementation of regulations coming in three years. Let the spying begin!

 

Read more about the FAA rulings after this link.

 

Also, get in on the UAV scene with DARPA's open UAVforge competition.

 

Cabe

http://twitter.com/Cabe_e14

 

Modding Microsoft's Kinect is nothing new; people have been designing new and crazy things with it for over a year already. The Kinect for Windows release in February 2012 is spurring even more innovation. However, occasionally someone uses it for some pretty ingenious feats, like George MacKerron, a researcher at University College London.

 

MacKerron took the Kinect and made an interactive 'Depthcam' that lets the user interact with real-time 3D images on the web. To do this, George used the OpenKinect Python wrapper along with the Autobahn WebSockets library to connect the Kinect to a web browser (in this case Chrome). To get around the network's firewall (at UCL), he used a node.js server, which is built on Chrome's V8 JavaScript runtime. He then used CoffeeScript on the client side to connect to the node.js server and render the received data as a WebGL particle system. As it stands right now, it only works with Google's Chrome browser and is limited to panning and scanning the live image with a mouse, but it's still pretty impressive all the same. To get a full rundown on George MacKerron's Depthcam, or to get the code and try it yourself, visit his website.
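One small piece of a pipeline like this is easy to sketch: packing a depth frame into a compact binary payload before pushing it over a WebSocket. The header layout below is a hypothetical illustration, not MacKerron's actual wire format:

```python
import struct

def pack_depth_frame(width: int, height: int, depths_mm: list) -> bytes:
    """Little-endian uint16 width/height header, then one uint16 per
    pixel (raw Kinect depth values fit comfortably in 16 bits)."""
    assert len(depths_mm) == width * height
    header = struct.pack("<HH", width, height)
    body = struct.pack("<%dH" % len(depths_mm), *depths_mm)
    return header + body

def unpack_depth_frame(payload: bytes):
    """Inverse of pack_depth_frame, as the browser side would do it."""
    width, height = struct.unpack_from("<HH", payload, 0)
    depths = struct.unpack_from("<%dH" % (width * height), payload, 4)
    return width, height, list(depths)
```

Binary packing like this keeps each frame far smaller than a JSON equivalent, which matters when streaming at interactive rates.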

 

Cabe

http://twitter.com/Cabe_e14

experiment-feat.jpg

Heat pulse switching (via York University)

 

Current hard drive capacities seem enormous compared to ten, or even five, years ago. However, the way our hard drives currently store data imposes a limit on how far we can go, commonly known as the superparamagnetic limit. Below a minimum size, the magnetic grains used to store bits begin to flip direction at random under thermal energy, destroying the data.

 

 

Past superparamagnetic limits:

Longitudinal recording: 100-200 Gbit/in²

Perpendicular recording (2010): 667 Gbit/in² - 1 Tbit/in²
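To see why grain physics becomes the bottleneck at these densities, it helps to work out how little area each bit actually gets (a rough geometric estimate, ignoring real track and sector layout):

```python
NM_PER_INCH = 25.4e6  # nanometres per inch

def bit_cell_side_nm(bits_per_sq_inch: float) -> float:
    """Side of a square cell if every bit got an equal share of the platter."""
    area_nm2 = NM_PER_INCH ** 2 / bits_per_sq_inch
    return area_nm2 ** 0.5

# At 1 Tbit/in^2 each bit occupies a square only ~25 nm on a side, which
# must still hold enough magnetic grains to give a readable signal.
```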

 

 

A group of researchers at the University of York in the United Kingdom is working on a heat-assisted magnetic recording technology that can record terabytes of information per second. Dr Alexey Kimel, of project partner Radboud University Nijmegen, explained, "For centuries, it has been believed that heat can only destroy the magnetic order. Now we have successfully demonstrated that it can, in fact, be a sufficient stimulus for recording information on a magnetic medium."

 

 

Instead of using a strong magnetic field, the technique uses pulses of heat to invert the magnetic poles at specific locations. In addition, the materials used with heat-assisted magnetic recording are much more stable at smaller scales. This allows researchers to create hard disks that store information more densely while at the same time speeding up read/write processing. The entire system is also more energy efficient, though there is no word on how much more.

 

 

 

The new technology brings many significant advancements. However, development is still in its early stages, and much more work is needed before it becomes commercially available to the public. I personally believe that solid-state, non-magnetic storage will ultimately be the true next-gen norm. However, heat-assisted storage will still have a place.

 

 

Cabe

http://twitter.com/Cabe_e14

California-based technology giant Google has started construction on an ambitious project designed to help stimulate innovation in Silicon Valley. The firm has announced that it will spend $120 million in order to build a new facility that will feature buildings that are screened against external radio waves and will host experiments including the use of rare gases and innovative optical coatings.

 

Although Google has refused to confirm some of the internet rumours surrounding the new facility, it has been speculated that the firm is modifying a separate lab as part of its advanced 'Project X' scheme.

 

The firm has, however, revealed that it intends to create a 'Google Experience Center', which will detail the company's landmark achievements to prospective clients. Impressively, it is set to cater for as many as 900 VIPs. Google has already started demonstrating some innovative ideas for its 'Android@Home' brand, such as allowing its mobile phone users to control things like music systems and other domestic appliances.

 

Writing to officials in Mountain View, California, Project Architect Andrew Burnett wrote: "The Experience Center would not typically be open to the public - consisting of invited groups, and guests whose interests will be as vast as Google's range of products, and often confidential.

 

"Therefore, the Experience Center must also operate somewhat like a museum, exhibit, or mercantile space allowing flexibility in the exhibits so that as Google's products and needs change, the space can adapt."

 

Apple, meanwhile, is also in the process of building its own facility that is set to serve a similar function. The project was actually one of the biggest passions of Steve Jobs, the former Apple Chief Executive, during his last months in the job.

 

IBM and HP, however, already boast similarly impressive facilities, meaning that innovation in the Valley is likely to continue over the coming years, with US firms seeking to stay one step ahead of their international rivals.

mastercardpaypass.jpg

(via Mastercard)

 

MasterCard introduced PayPass to the mainstream in 2011, calling it the next generation of electronic payment. MasterCard President Chris McWilton stated, "We're moving toward a world beyond plastic, where consumers will shop and pay in a way that best fits their needs and lifestyles with a simple tap, click or touch in-store, online or on a mobile device."

 

One of the biggest worries of having a credit card is theft and the debts that follow. PayPass requires a PIN entry with every use. Somewhat old in concept, it should be effective compared to simply forging the cardholder's signature. The system moves on from the black magnetic strip, adding EMV (Europay, MasterCard, Visa) technology instead, i.e., a chip inside your credit card.

 

PayPass readers also work wirelessly; just bring the card in close proximity to the reader. Accidentally tapped twice for the same item? Do not sweat it: the system recognizes the mistake and only bills you once.
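That double-tap safeguard can be sketched as a simple idempotency check. The five-second window and the choice of key fields here are assumptions for illustration; MasterCard's real rules are not spelled out in this article:

```python
DUP_WINDOW_S = 5.0  # assumed: identical taps within this window count once

class TapDeduper:
    """Remembers recent taps and suppresses accidental duplicates."""

    def __init__(self):
        self._last_seen = {}  # (card_id, merchant_id, amount) -> timestamp

    def should_bill(self, card_id: str, merchant_id: str,
                    amount: float, now: float) -> bool:
        key = (card_id, merchant_id, amount)
        last = self._last_seen.get(key)
        self._last_seen[key] = now
        return last is None or (now - last) > DUP_WINDOW_S
```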

 

PayPass provides an app for Android phones called "PayPass Locator," which helps users find merchants. In the app, one can search for types of businesses or get a list/map of merchants in the area. The regular gamut of features is found in the app: click on merchants for more info, send the location to friends, or send an SMS text from within the app.

 

 

The PayPass system will soon be available on Android phones, negating the need to carry cards. Google Wallet is a mobile payment system that stores every type of credit card onboard the phone/device, including gift cards, loyalty cards, etc. Using near field communication (NFC), the phone can transmit the same data in the same way as the PayPass cards. Like the cards, a PIN entry is needed for every transaction. The only device that can currently perform these tasks is the Nexus S 4G from Sprint, though several more phones are set to use the tech. Google also mentioned partnering with Apple, Microsoft, and RIM on adding Google Wallet to their devices.

 

As of 2011, there are over 300,000 PayPass merchant locations. In other words, the ecosystem has not permeated deep into our daily lives just yet. Still, using the system appears to be an inevitability, for better or for worse.

 

Cabe

http://twitter.com/Cabe_e14

 

Of course, the hacker community is hard at work tearing down this system, for instance by eavesdropping on the Google Analytics app-usage data of phone users. Although recently fixed, more holes surface every day. How about we just carry cold, hard cash?

sound system concept.jpgacousticr.jpg

(Left) Concept of the system (Right) Drawing of the MEMS generator (Via Purdue University)

 

A new pressure-sensitive medical implant, a microelectromechanical system (MEMS), is out that uses sound as a power source. The MEMS device is a sensor that monitors pressure in the urinary bladder, or in the sac of a blood vessel damaged by an aneurysm. Purdue University researchers invented the MEMS to one day treat patients with aneurysms, or with incontinence due to paralysis.

 

The MEMS device uses a vibrating cantilever, a thin beam fixed at one end like a miniature diving board. The cantilever vibrates when music within the 200-500 hertz frequency range plays. As it vibrates, it generates electricity, and that charge is stored in a small onboard supercapacitor. When the frequency falls out of the useful range, the cantilever stops vibrating and automatically sends the stored charge to the sensor. The sensor then takes a pressure reading and transmits the data wirelessly.
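The harvest/release cycle just described can be modelled in a few lines. The band limits come from the article; the charge bookkeeping is an illustrative simplification:

```python
BAND_LOW_HZ, BAND_HIGH_HZ = 200.0, 500.0  # useful cantilever range

def run_cycle(freq_samples, charge_per_sample=1.0):
    """Walk a stream of dominant-frequency estimates: accumulate charge
    while the music is in band, release the stored charge to the sensor
    the moment it leaves. Returns the list of released charge amounts."""
    stored = 0.0
    releases = []
    for f in freq_samples:
        if BAND_LOW_HZ <= f <= BAND_HIGH_HZ:
            stored += charge_per_sample  # cantilever resonates: harvest
        elif stored > 0:
            releases.append(stored)      # out of band: power the sensor
            stored = 0.0
    return releases
```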

 

ziaie-music2.jpg

MEMS generator (via Purdue University)

 

The cantilever beam is made from ceramic lead zirconate titanate, a piezoelectric (PZT) element. The sensor is about two centimeters long. Researchers even tested the device in a water-filled balloon to see if it still worked; the test was a success. The device could alternatively be powered by batteries or an external transmitter.

 

The four genres tested with the MEMS were rap, blues, jazz, and rock. Among the four, rap rose above the rest as the most effective. "Rap is the best because it contains a lot of low frequency sound, notably the bass," Purdue professor Babak Ziaie said. In rap, vulgar words and deep bass are put together so the listener understands the true power of the message, which does not give the genre the best reputation. Soon, doctors using the device will understand the true power of Tupac songs.

 

Cabe

http://twitter.com/Cabe_e14

 

UAVs are everywhere these days. It seems I can't even walk out of my house without running into a quad-rotor of some sort. Even local law enforcement agencies are getting into the act with their recent acquisition of military surplus.

 

For those of you who love them, DARPA (Defense Advanced Research Projects Agency) has teamed up with SSC Atlantic (Space and Naval Warfare Systems Center) to give us average 'Joes' the opportunity to design the next generation of Unmanned Aerial Vehicles. Called "UAVForge" (started at the end of 2011), the contest is heating up with both crazy and conventional designs. The collaboration uses crowd-sourcing along with a virtual environment and a fictional scenario that participants use to design their UAVs. You can design by yourself, with a team, or join an existing team, but participants have until Feb. 23, 2012 to do so.

 

The contest consists of 6 ‘milestones’ each group or person must go through before the winner is chosen.

● Milestone 1: Contestants create a concept video to show off their design; the entries are voted on, and the winners proceed to the next level.

● Milestone 2: Proof of flight. It has to be able to fly. The winners of this stage advance to the next round of competition.

● Milestone 3: This is where things get riveting as contestants compete with a live video demonstration.

● Milestone 4: This is the competition ‘fly-off’ where contestants compete for the prize of $100,000 US and have an opportunity to pilot the vehicle in an operational exercise.

● Milestone 5: Winners are transported all expenses paid to an operational field scenario.

● Milestone 6: The winner is awarded a contract to produce 15 operational vehicles.

 

5429fea0-74ba-44bf-bdd2-8d3dfe14cac8.Large.jpg

Concept UAV from the competition (via DARPA)


The videos for these UAV designs range from 'I can't believe that flies' to the ingenious. Standings for milestone 2 were based on 385 individuals casting 1,511 votes along with 255 comments, putting GremLion, an electrical and computer engineering team from the National University of Singapore, at number 1 of ten so far for this round. Their design looks like a hovering shop-vac with a helicopter rotor positioned vertically in the center. As crazy as that sounds, the design is incredibly stable and also features ground-tracking as well as obstacle-avoidance capabilities. You can see all the entry videos along with the contest rules and regulations at the UAVForge website: http://www.uavforge.net/

 

Whether you join in on the fray or not, this is one competition to follow.

 

Cabe

http://twitter.com/Cabe_e14

lares.jpg

LARES (via ESA & Stephane Covaja)

 

The Laser Relativity Satellite (LARES) was launched into orbit on February 13th to test Einstein's theory of relativity. The satellite is made from tungsten and measures only 36 centimeters across. Its surface is covered with reflectors used to measure its position from Earth using lasers on the ground.

 

 

Past satellites were placed into space to test the same theory with varying results. One, the Gravity Probe B satellite (2004, costing $750 million), brought in results within 19 percent while measuring frame-dragging, a distortion in the fabric of space-time caused by Earth's rotation. Scientists state that LARES should be able to produce precise measurements within one percent.

 

 

With such precise measurements, scientists and researchers will be able to put several theories to the test, including gravitational laws, general relativity, and mainly the Lense-Thirring effect. The Lense-Thirring effect, derived from general relativity, describes how a rotating massive body drags space-time around with it, subtly perturbing the orbits of nearby objects. If the precision lasers pick up this disturbance in the orbit of LARES, it will be further verification of general relativity.
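The size of the effect LARES is chasing can be estimated from the standard Lense-Thirring nodal precession rate for a near-circular orbit, Omega = 2GJ/(c²a³). Earth's spin angular momentum and the orbit radius below are rounded textbook values, not mission-exact figures:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
J_EARTH = 5.86e33    # Earth's spin angular momentum, kg m^2/s (approx.)
A_LARES = 7.82e6     # orbit semi-major axis, m (~1450 km altitude)

omega_rad_s = 2 * G * J_EARTH / (C**2 * A_LARES**3)

SEC_PER_YEAR = 3.156e7
MAS_PER_RAD = 206_264.8e3  # milliarcseconds per radian
drag_mas_per_year = omega_rad_s * SEC_PER_YEAR * MAS_PER_RAD
# ~1e2 milliarcseconds per year: a minuscule twist of the orbital plane,
# which is why millimetre-level laser ranging is needed to see it.
```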

 

 

The satellite was launched by Vega, a small rocket used to economically place lower-mass satellites into Earth orbit, from the station in Kourou, French Guiana. The launch also had a secondary goal: making Kourou a launch hot-spot for the ESA and other space agencies. With NASA out of the shuttle business, Kourou is sure to be busy.

 

 

Cabe

http://twitter.com/Cabe_e14

6812933033_85c639b417.jpg6830811307_f8714afb7e_z.jpg

(Left) Cross section of the nanoshell surface. (Right) How light, in red, propagates through the shells. (via Stanford University)

 

A paper published by a team of Stanford engineers reveals a new way to capture solar energy using nanotechnology: thin solar panels made of hollow nanoshells. The nanoshells are made of photovoltaic nanocrystalline silicon with a hollowed-out center. Due to their concave geometry, the surface of the material acts like a waveguide, forcing captured light to circulate inside the shells; the team likened the idea to how sound propagates around a whispering gallery. As a result, light spends longer inside the material and is absorbed more completely. With a three-layer structure, up to 75% of critical portions of the solar spectrum are absorbed.

 

 

Post-doctoral researcher on the project, Yan Yao, explained the benefit, "A micron-thick flat film of solid nanocrystalline-silicon can take a few hours to deposit, while nanoshells achieving similar light absorption take just minutes.... This is a new approach to broadband light absorption. The use of whispering-gallery resonant modes inside nanoshells is very exciting. It not only can lead to better solar cells, but it can be applied in other areas where efficient light absorption is important, such as solar fuels and photodetectors.”

 

 

Additionally, the construction of the nanoshells allows for quick production and many possible new applications. The material remains highly efficient under heat, and the thin construction allows for a flexibility not currently possible with solar panels. The technology holds a lot of potential for future high-efficiency sun-collection applications. So many players in the solar game; if only they would work together.

 

Cabe

http://twitter.com/Cabe_e14

dys.JPG

Dysprosium in the raw (via periodictable.com)

 

China dominates world production of rare earth minerals. However, China has been cutting back on production, citing environmental and domestic labor effects, and raising prices, potentially creating a monopoly on the materials. In mid-2011, China increased prices to 10 times the going rate at the beginning of the year. What else would one expect when one place controls 95% of a material?

 

 

This has been forcing companies to find ways to produce products without rare earths. A large concern is dysprosium, mainly used in high-powered magnets and very useful in data-storage applications such as hard drives. Companies in Japan are working on new ways to build these applications without the rare earth, while others are working on recycling methods to extract it from used appliances. Japan's $65 million USD plan is to cut dependence on dysprosium by 30% over the next 2 years.

 

 

The United States is also looking to avoid dependency on rare earth imports. Pacific Northwest National Laboratory is working on a rare-earth magnet replacement using a manganese composite material and advanced algorithms to formulate metal compositions free of rare earths. Additionally, many research projects are finding crystals and alloys that could eliminate the need for the neodymium used in many of today's tablets and smartphones. Cutting-edge research in the United States may soon bring us alternatives to rare earth electronics using readily available, inexpensive resources.

 

Cabe

http://twitter.com/Cabe_e14

Internet rumours are suggesting that Apple is currently putting the final touches to preparations for the eagerly-awaited launch of the iPad 3, which is apparently set to go on sale in March.

 

Technology blog All ThingsD, for example, has claimed that the launch event for the iPad 3 will be held in the first week of March. And if those rumours are to be believed, it is expected that the third generation Apple tablet will go on sale a week or so later.

 

Speculation regarding the launch date follows shortly after a photo appearing to show the back of the next iPad leaked on the internet. Although it was difficult to draw many conclusions about the features that would distinguish the device from its predecessor, the photograph has been taken as proof that the tablet has already entered production.

 

And, quite predictably, this has ignited rumours about all aspects of the iPad 3. The New York Times, for instance, has speculated that the new handheld device will boast the same proportions, but with an impressive new screen. Indeed, the news source asserted that the iPad 3 will feature a screen that is double the resolution of that attached to the iPad 2. The new device is expected to be powered by an A6 chip, while the iPad 2 is powered by the A5.

 

Looking forward to the next few weeks, the NextWeb speculated: "Apple's pre-event weeks are often spent soliciting demonstrations from many app developers and preparing demonstrations of those apps for the live event. Our sources tell us that this selection process is continuing at an increased rate as Apple looks to finalize the lineup for the iPad 3."

 

Apple, of course, never announces anything before its official events, which usually serves to intensify speculation and debate surrounding the features of a new device.

 

Despite this, consumers will have to wait for the official event before any of the speculation is confirmed as fact. Fans will be keen to learn whether the third generation tablet will boast improved cameras, or Siri - Apple's voice-controlled personal assistant.

"When you have a very tight lattice match, light generation happens far more efficiently... It really leads to LED 2.0 and a whole new disruptive technology curve." - Soraa CEO Eric Kim

 


 

● The Centre for Quantum Devices at Northwestern University, Evanston, Illinois, created the first UV HVPE-GaN (gallium nitride) substrate LEDs in 2002 while experimenting with different materials. Increased efficiency and heat dissipation were noted.

 

● Panasonic launched the industry's first white LED using a GaN substrate in 2007. The extremely expensive lamps were sparsely used. 

 

● In December 2010, Ostendo Technologies and Technologies and Devices International grew an LED structure onto a GaN substrate. This resulted in a 2.5x increase in emission intensity; in other words, an energy efficiency increase compared to similar LEDs of the same luminosity. Their tech is now used in LED TVs, consoles and Blu-ray players, automotive lighting, and solar cells.

 

bulb.png


MR16 GaN on GaN LED (via Soraa)

 

● Now in 2012, "startup" company Soraa Inc has just announced its mass-produced MR16 bulb, a 12.5-watt GaN-based luminary able to replace 50-watt halogen fixtures in store and museum lighting applications. Soraa's LED also places the GaN light-emitting semiconductor material onto a GaN substrate; Soraa labeled and trademarked the tech as "GaN on GaN™" LEDs. Matching the materials between the layers creates uniformity within the entire LED system. In other words, with fewer imperfections the Soraa LED can handle more current and produce more light at any given power level. Traditionally, LEDs are manufactured on silicon carbide or sapphire substrates; the lattice mismatch between the active GaN LED material and the substrate results in a loss of power and efficiency.

 

soraa.JPG

(via Soraa)

 

The GaN on GaN™ tech comes with one glaring disadvantage: cost. Soraa has not priced the various lamp options, but Soraa CEO Eric Kim said, "If they could buy our bulb for a price point less than $25, their payback period is less than a year. At that price point, it's a no-brainer."

However, Paul Scheidt, marketing manager at competitor CREE, stated that making a GaN substrate LED would result in a cost "on the order of 50-100 times more expensive than an equivalent sapphire wafer. So, while the wafer cost doesn't matter too much in the world of GaN-on-sapphire LEDs, it definitely would be a major expense for GaN-on-GaN."

 

No true price has been announced. However, the first run of MR16 lamps will be available in the first quarter of 2012.

 

Soraa was founded in 2008 by a very able group: Dr. Shuji Nakamura, inventor of the blue laser and white LED; Dr. Steven DenBaars, founder of Nitres; and Dr. James Speck of U.C. Santa Barbara's College of Engineering. Together, they raised over $100 million USD in investment capital before solid prototypes were produced.

 

Although what they are producing is not exactly new, they are one of the few who produce the tech on such a massive scale. Bringing these efficiencies to the large-scale lighting space is the beginning of full LED adoption.

 

Cabe

http://twitter.com/Cabe_e14

 

See more lighting innovations in element14's Lighting Group.

STCF04_p3268big.jpg

STCF04 (via STMicroelectronics)

 

Taking decent indoor pictures with a smartphone is almost impossible in low-light conditions. Think about it: most of the time you have to take the same picture at least twice. The first one is blurry, and the second turns out dark because the flash on your phone (if it even has one) isn’t bright enough in low light. However, there is hope on the horizon with the help of STMicroelectronics' new chip, dubbed the ‘STCF04’. The chip is actually a combined camera flash and torch controller that drives the LED/flash module up to an astounding 40 watts of illumination (320 mA current), compared to today’s standard of just 4 watts; ST states the boosted output “produces the same amount of light as a security flood lamp”.

 

The STCF04 uses a high-current MOSFET switch in place of the lower-rated switches housed in today’s generation of smartphones, along with a supercapacitor and high-power white LEDs that ST says can also serve as emergency flash lighting. With the help of the torch controller, users will be able to select 12 levels of brightness, along with 8 levels for the flash controller, to fine-tune their lighting needs. The chip is already being sampled by companies such as Murata, which produces high-quality supercapacitors, and OSRAM, a maker of LEDs and solid-state lighting. Full production of the STCF04 will begin this quarter of 2012, selling for $2.00 (US) in quantities of 1,000 or more. With the STCF04 housed in a 3 x 3 mm TFBGA package, we can expect to see it in the next generation of smartphones. We will no longer have to explain the darkness of our pictures as ‘mood lighting.'
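A rough illustration of why the STCF04 pairs its LED driver with a supercapacitor: a supercap can dump a short, high-power pulse that a phone battery cannot comfortably source. Note that the component values below are assumptions for the sake of the sketch, not figures from ST's datasheet.

```python
# Why a supercapacitor for a camera flash? Illustrative numbers only; the
# capacitance and voltages here are ASSUMED, not from ST's datasheet.

def supercap_energy_j(capacitance_f, v_start, v_end):
    """Usable energy when discharging from v_start to v_end: E = C*(V1^2 - V2^2)/2."""
    return 0.5 * capacitance_f * (v_start ** 2 - v_end ** 2)

e = supercap_energy_j(0.22, 5.0, 3.0)   # hypothetical 220 mF supercap, 5 V -> 3 V
pulse_s = e / 40.0                      # how long it could sustain a 40 W flash
print(f"{e:.2f} J, about {pulse_s * 1000:.0f} ms at 40 W")   # 1.76 J, about 44 ms
```

Even a small supercap, pre-charged between shots, can comfortably buffer the tens of milliseconds a high-power flash pulse lasts.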

 

Cabe

http://twitter.com/Cabe_e14

Solar energy collection might just be the best alternative energy solution. It collects energy from outside the earth's ecosystem, instead of stealing kinetic energy from the planet itself. If we are going to rely on it, collecting more of that energy is paramount. Up until now, commercial solar energy collection has peaked at around 15% light-to-energy conversion efficiency. Those standard returns have been shattered by a couple of companies out of the USA.

 

Yablon-solar1.jpg

Alta Devices solar panel (via University of California at Berkeley)

 

Founded in 2007, California-based Alta Devices has collected well over 72 million USD in investment funding. At the Photovoltaic Specialists Conference (PVSC37), Alta Devices demonstrated single-junction solar cells, made of gallium arsenide (GaAs), with a conversion efficiency of 27.6%, gaining them the world record for conversion under 1-sun illumination. Alta's maximum to date is 28.2%.

 

Alta Devices co-founder and University of California at Berkeley Professor Eli Yablonovitch explained the tech behind the world record at PVSC37, "Up until now it was understood that to increase the current from our best solar materials, we had to find ways to get the material to absorb more light. But, the voltage is a different story. It was not recognized that to maximize the voltage, we needed the material to generate more photons inside the solar cell. Counter-intuitively, efficient light emission is the key for these high efficiencies.”

 

The photovoltaic (PV) boost from Alta Devices was later evaluated by the National Renewable Energy Laboratory (NREL) to have a consistent 23.5% conversion rate. To go along with the boost is an Alta-invented manufacturing process that yields a one-micron-thin GaAs layer, making for an extremely flexible solar cell. This process brings down the cost of the solar panel substantially. With price and efficiency taken into consideration, the Alta Devices cell is a close competitor to fossil fuels, even without subsidies.

 

The theoretical maximum efficiency for single-junction cells is 33.5%, also known as the "Shockley-Queisser limit." Alta strives to get even closer in the coming years.
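As a quick sanity check, using only the figures quoted in this article, here is how the reported efficiencies stack up against that 33.5% limit:

```python
# Comparing the efficiencies quoted in this article against the 33.5%
# single-junction Shockley-Queisser limit.
SQ_LIMIT = 33.5  # percent

cells = [("Alta 1-sun record (PVSC37)", 27.6),
         ("Alta best to date", 28.2),
         ("NREL-verified rate", 23.5)]

for name, eff in cells:
    print(f"{name}: {eff}% = {eff / SQ_LIMIT:.0%} of the theoretical limit")
```

Alta's best cells are already within about 16% of the single-junction ceiling, which is why further gains get progressively harder.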

 

nrel-semprius.jpgtech_mtp_stamp2.jpg

(Left) Semprius solar panel (via NREL) (Right) Die placing solar dots on a substrate (via Semprius)

 

On the other side of the country, North Carolina-based Semprius achieved a 33.9% PV module efficiency (850 W/m²). Semprius employs high-concentration PV (HCPV) triple-junction GaAs cells. Even with the ability to print solar cells as small as a sentence period, Semprius is burdened with the high cost of triple-junction cells. Nevertheless, it is the first time that over 1/3 of the sun's energy has been converted. The NREL recorded that at a concentration of 1,000 suns, the triple-junction cell was able to convert 41% of the energy.

 

Semprius CPV applications engineer Kanchan Ghosal  explained, "We're using a completely different approach to what has been practiced. This approach uses micro-cells and transfer printing to significantly reduce the use of materials in highly concentrated PV modules. And it provides a highly parallel method to manufacture the module, based on established microelectronics processes and equipment."

 

After Semprius invented a way to lower the cost of solar-cell printing, Siemens bought a 16% stake in the company. Coupled with over $38 million in other investments, Semprius may make a solar splash yet.

 

Either way one goes, there is a sizable return on the investment cost of solar energy; up to 5 years to pay off the equipment is standard. The efficiency boosts lower that time, but initial cost still remains the number one hurdle to wider adoption of solar.
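A back-of-the-envelope model shows how an efficiency jump shortens that payback period. Every input below (system cost, panel area, insolation, electricity price) is an assumption chosen for illustration, not a figure from Alta Devices or Semprius:

```python
# Illustrative payback model; all inputs are ASSUMPTIONS, not vendor figures.

def payback_years(system_cost_usd, panel_area_m2, efficiency,
                  insolation_kwh_m2_day=5.0, electricity_price_usd_kwh=0.12):
    """Years until electricity savings equal the up-front system cost."""
    kwh_per_year = panel_area_m2 * insolation_kwh_m2_day * efficiency * 365
    return system_cost_usd / (kwh_per_year * electricity_price_usd_kwh)

for eff in (0.15, 0.235):   # typical commercial cell vs. NREL-verified Alta rate
    print(f"{eff:.1%} efficient: {payback_years(5000, 20, eff):.1f} years")
```

With these assumed inputs, bumping efficiency from 15% to 23.5% cuts the payback from roughly seven and a half years to under five, which is the article's point in miniature.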

 

Cabe

http://twitter.com/Cabe_e14

InductionPoweredMedicalDevice.jpgA wireless medical pressure transducer that Dr. Joshua Medow first prototyped in his home lab a few years ago is now being tested in sheep and could be tested in human patients within two years.

 

The device is intended for patients with hydrocephalus who have a shunt implanted to reduce  cerebrospinal fluid pressure.  When these patients have symptoms as simple as a headache, there is no easy test to determine if a failure of the shunt is responsible. 

 

Dr. Medow had the idea of implanting a pressure sensor in the brain and transmitting the data to a receiver outside the body.  He was knowledgeable about electronics, so he built a proof of concept using through-hole parts on a breadboard. 

 

The receiver provided power to the transmitter inductively by applying 60 Hz AC to a coil.  The transmitter used a bridge rectifier and a linear regulator to generate 15 V DC.  He used this voltage to excite a strain gauge and to power an op-amp circuit and a voltage-to-frequency converter.  The output of the frequency converter was a function of the strain gauge deflection and was in the 800 Hz range.  He did all of this with very old-school parts such as the LM7815CT, LM324, and LM331.  These are all parts I used frequently in the '90s when I started doing electronics, and they were already twenty years old at that time. 
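The chain can be modeled in a few lines. The article only gives the topology (strain gauge, op-amp, LM331-style voltage-to-frequency conversion with an output near 800 Hz), so the scale factors below are invented purely to illustrate how pressure gets encoded as a frequency and decoded on the receiving side:

```python
# Illustrative model of the analog chain: strain gauge -> op-amp ->
# voltage-to-frequency converter. The scale factors are ASSUMED; only the
# topology and the ~800 Hz output range come from the article.

def bridge_voltage(pressure_mmhg, sensitivity_v_per_mmhg=0.001):
    """Strain-gauge bridge output, assumed linear in applied pressure."""
    return sensitivity_v_per_mmhg * pressure_mmhg

def v_to_f(v_in, hz_per_volt=2000.0, base_hz=790.0):
    """Voltage-to-frequency conversion with assumed scale and offset."""
    return base_hz + hz_per_volt * v_in

def decode_pressure(freq_hz, hz_per_volt=2000.0, base_hz=790.0,
                    sensitivity_v_per_mmhg=0.001):
    """Receiver side: invert the chain to recover pressure from frequency."""
    return (freq_hz - base_hz) / hz_per_volt / sensitivity_v_per_mmhg

freq = v_to_f(bridge_voltage(10.0))      # 10 mmHg of intracranial pressure
print(freq)                              # 810.0, in the 800 Hz range
print(round(decode_pressure(freq), 3))   # 10.0
```

The receiver only has to count a frequency, which is far more robust over a noisy inductive link than trying to recover an analog voltage directly.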

Intercranial Pressure Sensor Block Diagram.jpg

 

This simple proof of concept led to prototypes that integrate all the circuitry in one piece of custom silicon and use MEMS to measure pressure.  The new pressure monitor transmits at a higher frequency and uses the same antenna to transmit its data and receive power. 

 

It would have been easy for Medow to dismiss the project when he conceived it because he didn’t have the resources to create custom silicon.  Instead he built a prototype.  That led to a collaboration with engineers at the University of Wisconsin to produce a more advanced prototype.  Last fall, three years after the early prototype, he was featured on the front page of the Wisconsin State Journal.  Now the technology is being tested in animals and is a few years away from human trials. 

 

Necessity breeds innovation. Clothing designers have two approaches to designing garments: imagining how a piece will look, or physically making the design to see how it will turn out. Amy Wibowo of the University of Tokyo, Japan, falls into the latter category. After spending large amounts of time creating pieces only to throw them aside, she set out to create a way to design virtually. The result was "Dress Up Clothing Design System," an augmented reality program that lets a designer create in 3D space.

  

 

No colored pencils or cutting tools are necessary; only the system's surface and a couple of styluses are needed. Six ceiling-mounted cameras are placed to watch over a mannequin (dummy) and the designer. The user has two wireless mice attached to a frame alongside tracking spheres. When you start designing and moving the dummy, the cameras pick up the movement of the styluses and the dummy's position and transfer it to the system, so you can see your design come to life. In this case, Wibowo projected the design image onto a screen.

 

  

After the design is finished, the software makes "patterns" out of the design for physical construction. Patterns are the shapes that sections of fabric are cut into before being sewn into a complete garment.  Some could argue this is lazy for fashion students who have to learn how to make patterns. Unfortunately for purists, Wibowo has already created the software. The ripple effects are soon to follow.

  

 

The Dress Up Clothing Design System is hardly something most people could set up in their own house. Wibowo considered that fact when thinking about one day creating a similar system using something like a Wii remote.

 

  

Wibowo stated, "The idea is to make it easy for people to design clothes.” It does make it easier. The system may be fast, but would you agree it bypasses the whole learning process of making clothes? Before we know it, six-year-olds will become renowned fashion designers. That is the point of technological advancement, right? Anything is possible.

 

  

See Wibowo's software in action at the TEI conference in Kingston, Ontario February 19-22.

 

 

Cabe

http://twitter.com/Cabe_e14


pirate-bay-ship-dark_display_medium.jpgFirst off, it is always bad to fear things that cannot be controlled.  Physibles (data objects that can become physical objects) will become a reality on some level. If it ends up being possible and prevalent to print running shoes, piracy prevention measures will likely be as effective as DRM has been for music and movies.  So let's all just calm down about how to stop the technology and think about how to use it.

 

Second, there isn't much of a reason for manufacturers to fear this new technology.  It is crazy to think that it will be cheaper and easier for most users to print a product at home instead of buying it and having it delivered.  The best kind of profit to be made is from leveraging high volumes to bring down prices and complexity, then using that margin to recoup development costs.  That business model is the reason that electronics manufacturers used to publish full service manuals containing schematics without fear of being undercut.  This practice is largely hindered due to increasing costs and the vulnerability from off-shore manufacturers copying the design and competing with the same volumes at lower costs (a big problem, but not related to physibles).

 

There are, however, some manufacturers that should be shaking in their boots at the notion of people printing parts at home.  Companies that try to turn inexpensive ABS plastic parts into a profit center by charging huge margins are all too common. When these companies find their oversized margins demolished by physibles, they can only blame their demise on the fact that they decoupled their income sources from the value they bring to customers.

 

makerbot.jpgThe last reason manufacturers should not fear physibles is the position of power that they currently hold – a truly unique situation in a truly unique time.  They have the benefit of hindsight from how piracy affected the music and movie industry.  Media companies tried desperately to hold on to their high-margin boxed media products even though it went against the grain of what the customer really wanted.  This only drove potential customers to piracy as the only means of downloading content.

 

Now that the digital media dust is settling, companies are finding that there is a large market for digital media that competes on convenience as well as legal and moral grounds. But because they left it to others to create online stores for digital media when the market emerged, Apple and Netflix now take a (rather large) piece of the pie.  Compare that to where part manufacturers are today.  They have the technical drawings, staff, and revenue that can be used to develop a way to offer part information to their customers directly.  Selling the physible in an easy, legal, and reasonably priced way while 3D printing technology develops would be revolutionary.  The 'factory-direct' approach could allow them to set a price low enough to compete with the free, illegal, and less convenient Pirate Bay.  There will certainly be pirates copying the design for free, but it may end up having the same impact shoplifters have on stores: unfortunate, but tolerable.

 

As with most disruptive technologies, approaching physibles with fear can only be good in the short term. Hopefully the people working hard to invent the next great widget will be paid for the value they create, even if on a totally different business model.

biomask_concept.jpg

Biomask concept (via UT Arlington)

 

Being enlisted in the armed forces signs everyone up for a high risk of injury. Facial damage is one of the most debilitating injuries, and special care has to be taken. Soldiers injured in the face may now have the option to bypass surgery and use a Biomask to repair their wounds.

 

Currently, the procedure to repair the face involves removing the damaged areas and grafting in new skin. With this procedure, patients run the risk of deformities, speech problems, and scarring. The Biomask does not involve going under the knife; the treatment is simply wearing the mask, which speeds up the healing process of disfiguring facial burns and helps rebuild the face.

 

It is a polymer mask with electrical, mechanical, and biological components built right in that allow the magic to happen.  Actuators press the mask to the patient's face. Onboard sensor arrays give feedback on the healing process, as well as a guideline on how to handle certain areas. "Localized activation of treatment" can be administered as the system decides: a network of "micro-tubing" delivers antibiotics, painkillers, and stem cells (for rapid re-growth) onto specific areas of the face. Although the mask provides 24/7 healing, no specific timetable was given for the average repair time.

 

Developing a medical device is more critical than most engineering efforts, so many heads were joined together to make sure the Biomask is not only effective but safe.  The project is led by Eileen Moss, an electrical engineer and research scientist at the UT Arlington Automation & Robotics Research Institute. Her partners include the Army Institute of Surgical Research at Brooke Army Medical Center and Northwestern University.

 

The Biomask was funded by a $700,000 research grant from the U.S. Army Medical Research & Materiel Command. Not a lot to ask for a revolutionary healing system.

Northwestern University is hard at work studying the wound healing, while the UT Arlington team is focused on developing the Biomask prototypes. The prototypes will be tested by the other collaborators first, so that Moss can use the results to improve the mask before it is fully released. The goal is to get the Biomask accessible to our soldiers within 5 years.

 

Cabe

http://twitter.com/Cabe_e14

 

Pico projectors are a useful feature to add to a cell phone. Take any random bus stop worldwide: you will see a large percentage of people straining their necks. The bent posture follows them into the elevator, the living room, and the desk at work. Pico-projector-based smartphones have a dual purpose: helping straighten up your posture while allowing you to view the phone beyond its small, restricted screen. But that is the end of their capabilities.

 

A team of researchers has taken the projector concept to the next level: interactive virtual projection (VP). Dominikus Baur, Sebastian Boring, and Steve Feiner (of the Universities of Munich and Calgary and Columbia University in New York, respectively) set out to make content manipulation more useful. Their system uses a centralized server that handles all data from the "projecting" cell phone and a monitor acting as a "projection screen." While projecting an image, the phone's camera takes screenshots and the server compares them to the monitor's images in real time; the screen synchronizing even works with video. This creates a virtual 1-to-1 movement between devices. The server can also handle multiple phones on the same monitor. Orientation of the phone (tilting back and forth), which causes distortion on regular projectors, is ignored for VP.  (The system used unmodified iPhones and a Windows i7-based server.)

 

vp_flow2.png

Concept drawing (via Dominikus Baur)

 

Connectivity between VP and applications, such as navigation, is the near-future goal of the technology. Grabbing a section of a map on the phone allows for quick portability of the navigation information. As the team said, "Give it a few more years (and a friendly industry consortium) and this could become reality."

 

Cabe

http://twitter.com/Cabe_e14

(Above) Video of the mouse's brain operating (via Max Planck Institute of Biophysical Chemistry)

 

Many methods have been used to study and understand the brain, but they have limitations that restrict how well micro-structures can be seen and how they behave in real time. Now, scientists at the Max Planck Institute of Biophysical Chemistry in Gottingen, Germany, have developed a new nanoscopic scanning system that observes the structures of a living mouse brain at previously impossible resolution.

 

 

To achieve this unprecedented resolution of live brain images, the team developed a stimulated emission depletion (STED) fluorescence microscope. Images come from a 1.3-numerical-aperture lens, which focuses an 80 MHz train of 70 ps pulses of 488 nm light through a glass-sealed hole in the skull of the mouse.
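The duty cycle of that excitation source follows directly from the two quoted figures:

```python
# Duty cycle of the excitation light: 70 ps pulses repeating at 80 MHz.
rep_rate_hz = 80e6
pulse_width_s = 70e-12

duty_cycle = rep_rate_hz * pulse_width_s
print(f"duty cycle = {duty_cycle:.4f}")   # 0.0056: light is on ~0.56% of the time
```

In other words, the sample sees light for only a tiny fraction of the time, which helps limit photodamage to the living tissue while still delivering the intense peaks pulsed STED imaging relies on.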

 

 

The mice are given an extra gene that makes cells of the brain glow yellow; the glow is stimulated and detected by the STED microscope. This allowed researchers to view the living brain of a mouse with a resolution of 70 nanometers.

 

 

The STED microscope focused on the cerebral cortex of the mouse’s brain, an area that controls movement.  The neurons and dendrites were observed as they moved to make connections with neighboring cells, perhaps capturing the process of the mouse thinking. A challenge in achieving the high resolution of the neuron structure was minimizing vibrations. For this reason, the mouse was anaesthetized and even its vital functions were performed artificially. The resulting images were also filtered using emerging super-resolution techniques.

 

 

This type of technology could enable researchers to diagnose the connectivity problems that occur in disorders like Parkinson’s or dementia by recreating those disorders in mice. Further developments of STED microscopes could penetrate deeper into the brain, and implants could one day provide images of a conscious animal’s brain functioning in real time.

 

 

On a related note, another group of researchers is working on reconstructing video of people's thoughts and dreams.

 

 

Cabe

http://twitter.com/Cabe_e14

 

Animals certainly carry a heavy burden in science. Engineering On Friday's take on the subject.

 

Despite ongoing concerns over the state of the wider US economy, a new study has revealed that technology companies based in Silicon Valley are succeeding in bucking the trend. The residual impact of the banking crisis in the late 2000s has, of course, had a profound impact on employment levels and economic growth in the US. It does not, however, appear to have extended too far into the country's technology market.

 

That's because new data shows that employment in the software engineering profession is rebounding significantly faster than in most other areas of the US labour market. Incomes, meanwhile, have also risen for Silicon Valley residents, especially those who make more than $100,000 per year, Joint Venture Silicon Valley said.

 

The survey, conducted by the not-for-profit group, also confirmed that the proportions of Silicon Valley residents with health insurance and of children graduating from high school are some way above California and national norms. In fact, the Valley accounted for 15 percent of California's total income tax revenue.

 

By contrast, some 57 percent of Valley residents aren't doing so well, with their troubles mirroring those seen in the wider US economy. In 2011, for example, the share of new residential developments classified as affordable was the lowest in the past 14 years.

 

Russell Hancock, Chief Executive Officer of Joint Venture Silicon Valley, commented: "Small businesses are clearly not out of the rough. The public sector is still in the throes of a fiscal crisis, and median household income continues to fall as the gap between those succeeding and those struggling grows wider and wider."

 

But for the Valley's better off and better educated, last year was a "bonanza", according to Mr Hancock. "It's as if we're becoming two valleys," he observed.

 

The latest Silicon Valley Index shows that there are currently three million inhabitants of the Valley, 37 percent of whom are foreign born.

Cockroach_ACS.jpg

The cockroach fuel-cell volunteer (via ACS Publications)

 

The living-cell battery from the Matrix movies is real, but not for us humans.

 

Scientist Daniel Scherson and his team from Case Western Reserve University, Ohio, have taken to immobilizing insects and turning them into living fuel cells. Using a "false death's head cockroach" (Blaberus discoidalis), the team inserted electrodes made of thin carbon fibers, sealed in glass capillary tubes, into incisions made in the insect. The biofuel cell uses the roach's "trehalose" sugar as fuel, combined with oxygen, to generate electricity. The output maxed out at approximately 55 μW/cm² at 0.2 V.
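Those two figures imply the cell's current density at maximum power, since P = V × J:

```python
# Current density implied by the reported maximum-power point: J = P / V.
power_density_w_per_cm2 = 55e-6   # 55 uW/cm^2
cell_voltage_v = 0.2

current_density_a_per_cm2 = power_density_w_per_cm2 / cell_voltage_v
print(f"{current_density_a_per_cm2 * 1e6:.0f} uA/cm^2")   # 275 uA/cm^2
```

A few hundred microamps per square centimeter is tiny by battery standards, but it is in the right ballpark for the micro- and nano-scale devices the team has in mind.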

 

ja-2011-10794c_0005.gif

The Trehalose based fuel-cell. "bienzymatic trehalase|glucose oxidase trehalose anode and a bilirubin oxidase dioxygen cathode using Os complexes grafted to a polymeric backbone as electron relays was designed and constructed." (via ACS Publications)

 

Although the glass tube, two pins through the pronotum, two more pins in the posterior of the abdomen, and a series of staples to hold the cockroach down did no significant damage to the insect's critical organs, we can all assume it was not a pleasant experience for our insect friend. There may be a reprieve for living creatures on this project: the team was able to achieve similar results with the same procedure on a shiitake mushroom.

 

The team stated that the goal is to power micro and nano devices with a semi-recharging battery (i.e., the insect eats and then recharges its core, so to speak). Research funding was provided by the National Science Foundation. Read about the whole project after the link.

 

It is a shame that creatures have to suffer so much for our benefit and gain.

 

Cabe

http://twitter.com/Cabe_e14

 

Entering an industry with already high competition was no deterrent for Recon Instruments. Recon just announced a partnership with Smith Optics and SCOTT Sports over MOD sports goggles. They are attempting to bring augmented reality in a heads-up display to competitors. Will they succeed?

 

An example of the onboard HUD (via Recon Instruments)

 

The MOD system can be immensely helpful when riding down new territory. The goggles are equipped with GPS, accelerometer, gyroscope, altimeter, and temperature sensors to provide skiers and riders with precise speed, jump, vertical, altitude, location, distance, and temperature readings in real time. The "MOD Live" system comes with the largest database of trail maps in the world preloaded right into the goggle. No worries about bringing a music player either; the system has a built-in Music Playlist Mode. MOD plans to unlock a camera connectivity app for point-and-view action in May 2012.

 

Recon Instruments is no stranger to the goggle business. They partnered with industry giant Zeal Optics on a series of GPS-enabled goggle sets; the Transcend and Z3 have almost the same specs touted for the MOD system, but are available now.

 

 

Zeal Optics is embarking down the HD camera capture route with their new iON goggle tech. iON goggles have the ability to capture HD video and photos while having the "time of your life down the mountain." Connectivity and sharing with social networks is the main focus. Storage capacity is limited to the size of the micro-SD card installed in the goggle set.

 

What makes the iON goggle so popular is the ability to capture your flight down the mountain in remarkable quality. Making this happen is a 1080p True HD video camera that captures real-time video and sound and can snap 8-megapixel photos: simply press a button on the side of the iON goggles, and the picture is taken. The iON not only uses a high-quality camera but pairs it with a 170-degree wide-angle lens, so nothing is left out of the picture. (I am not a fan of the extreme-sport fish-eye-lens look, for the record.) Zeal Optics claims the camera is good for up to six hours of shredding per charge. (External battery possible, Zeal?)

 

All these companies are chasing the elusive all-in-one model: HD video and every sensor imaginable. It seems they are walking hand-in-hand towards that goal; we just have to wait. However, I would like to see an augmented reality competitor in this field. Projecting the perfect launch path to follow while snowboarding would turn us all into Shaun White.

 

Cabe

http://twitter.com/Cabe_e14

unicycle2.jpgunicycle4.jpg

Stephan Boyer with his unicycle (via Stephan Boyer)

 

Stephan Boyer, an electrical engineering student at MIT, took his transportation needs into his own hands by creating a motorized unicycle. However, he leaves the balancing to the unicycle.

 

Boyer explains that the unicycle only balances in the direction of travel (forward and backward), so practice is needed to balance completely. To balance, the unicycle first determines its angle by feeding the gyro and accelerometer readings into a complementary filter.  The output is fed through a PID loop running at 625 Hz, which computes the correction toward the balancing angle.

 

The motor then reacts via a MOSFET H-bridge, driven by a 1.22 kHz pulse-width modulation (PWM) signal. The motor controller has an onboard switching voltage regulator that powers the logic circuitry and the charge pump needed for the high-side MOSFETs.
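A minimal sketch of that loop, with placeholder gains (Boyer's actual filter constant and PID tuning are not given here):

```python
# Sketch of the balancing loop: a complementary filter fuses gyro and
# accelerometer estimates, and a 625 Hz PID loop turns the angle error into a
# clamped motor duty command. All gains are PLACEHOLDERS, not Boyer's values.

DT = 1.0 / 625.0   # control-loop period, seconds

def complementary_filter(prev_angle, gyro_rate, accel_angle, alpha=0.98):
    """Blend integrated gyro rate (fast but drifty) with accel angle (noisy but stable)."""
    return alpha * (prev_angle + gyro_rate * DT) + (1.0 - alpha) * accel_angle

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * DT
        derivative = (error - self.prev_error) / DT
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=50.0, ki=5.0, kd=1.0)
angle = 0.0
# One iteration: gyro reads 0.1 rad/s of tipping, accel estimates 0.02 rad of tilt
angle = complementary_filter(angle, gyro_rate=0.1, accel_angle=0.02)
duty = max(-1.0, min(1.0, pid.update(0.0 - angle)))   # clamp to PWM duty range
print(angle, duty)
```

The filter's blend explains why the vehicle only balances in the direction of travel: it estimates (and corrects) tilt about a single axis, leaving side-to-side balance to the rider.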

 

The unicycle is comprised of:

●    A custom MIG-welded steel chassis

●    A 450 Watt electric motor

●    Two 7 Ah 12 Volt batteries

●    A 5DOF inertial measurement unit

●    The OSMC H-bridge

●    An ATmega328P microcontroller

 

 

The circuit highlights:

 

●    Filtering Capacitors on the power rails

●    Reset pin for AVR microcontrollers

●    20 MHz external crystal oscillator

●    IMU connected to ADC pins

●    And indicator LEDs

 

 

The unicycle has a maximum speed of 15 mph and features a kill switch held in the rider’s hand that shuts off the motor when the rider lets go. Added software serves to detect accidental releases of the kill switch. The batteries last for at least 5 miles.
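The accidental-release detection isn't specified, but one plausible sketch is a simple grace period: only cut the motor if the switch stays released longer than a short threshold (the 0.25 s value below is an assumption):

```python
# Hypothetical accidental-release filter for a dead-man's kill switch.
# The grace-period value is an ASSUMPTION; the actual logic isn't published.

GRACE_S = 0.25

class KillSwitch:
    def __init__(self):
        self.released_since = None   # timestamp of the current release, if any

    def motor_may_run(self, pressed, now_s):
        """Return True while the motor is allowed to keep running."""
        if pressed:
            self.released_since = None
            return True
        if self.released_since is None:
            self.released_since = now_s
        return (now_s - self.released_since) < GRACE_S

ks = KillSwitch()
print(ks.motor_may_run(False, 0.00))   # brief fumble: True, motor keeps running
print(ks.motor_may_run(True, 0.10))    # re-gripped in time: True
print(ks.motor_may_run(False, 0.20))   # released again: True (within grace)
print(ks.motor_may_run(False, 0.50))   # still released past 0.25 s: False
```

The trade-off is the usual one for dead-man switches: a longer grace period tolerates more fumbling but delays a genuine emergency stop.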

 

 

Future work includes building a case to protect the circuitry and fabricating an aluminum chassis to lower the weight.

 

 

All of the code was written in C and can be found in the public domain, along with all of the unicycle's components, after the link. An EAGLE version of the circuit is on the way too. Time to build yourself a self-balancing unicycle! Alternatively, you can buy the 20 mph unicycle from Ryno for $25,000.

 

Cabe

http://twitter.com/Cabe_e14

 

Before many of us have even experienced augmented reality (AR) on our phones, manufacturers are poised to release the next generation. Chip designer ARM is the main driving force behind the latest in AR; its goal is to use all the available processing power our devices can muster. The new augmented reality can scan 3D environments using a phone's camera and, in real time, produce an image that is animated or even descriptive.

 

 

For instance, say you are looking for a new office for your business. Walking around outside with this feature, you may be able to see which buildings have offices available for rent, or even get a preview of the architectural office layout. This would all be made possible by the camera on your phone scanning the environment and producing a picture of it overlaid with descriptive information. It could also serve recreational purposes, such as augmented reality games, or educational purposes, such as providing historical background on relevant locations.

 

 

However, this technology is still in its infancy, as developers are still testing and designing the applications. Furthermore, the battery drains rather quickly when all that processing power is used to scan the 3D environment. The latest mobile gadget, Sony's PS Vita, has a feature similar to the proposed AR; however, it only projects a sprite or animated picture onto the environment, whereas this feature would use the environment itself and display a new, augmented picture of your surroundings. AR is in need of an improvement, and ARM is laying the groundwork.

 

Commercial & Technology.pngAR Value Chain.png

(Left) ARM AR goals (Right) Value Chain, how content is produced (via ARM)

 

ABI Research claims that the AR market was worth about $21 million USD in 2010 and will reach $3 billion by 2016. With that in mind, ARM released the Mali GPU series, the latest being the Mali-T658. Aside from handling the ever-growing video and gaming demands of users, the Mali series attempts to handle battery consumption on a next-gen level as well. Placing AR elements spatially in real-time video can be a serious burden on the system. The GPU takes that burden off of the main CPU, accomplishes the tasks better, and saves power at the same time. ARM's CPU and Mali ecosystem has the ability to handle what is to come, and the company hopes developers will hop on board.

 

 

ARM partner Metaio claims that next-gen AR will be in every smartphone by 2014 and grow into a $715 million USD industry. Of course, Metaio is pushing its AR development environment, "junaio Creator," along with the statement. Metaio released the following video explaining the new type of AR it promoted at the insideAR conference.

 

 

Metaio provides the middleware and the presentation of content; now it needs developers. See more about ARM in the element14 ARM Developers Group.

 

 

Cabe

http://twitter.com/Cabe_e14

The world's largest virtual telescope has been created in Chile by astronomers at the Paranal Observatory by linking four telescopes together. The resulting Very Large Telescope (VLT) interferometer is, indeed, the largest single such device on Earth, measuring an impressive 130 metres (about 427 feet) in diameter.

 

Astronomers hope that the new device will help to give them a much more intimate look at the universe than has ever been the case. That is, of course, largely due to the fact that scientists have never before been able to successfully link more than three telescopes of this kind.

 

Indeed, the team in Chile tried to link the telescopes in March 2011, but that effort ultimately proved fruitless. Now, though, it is hoped that the device, the biggest ground-based optical telescope on Earth, will offer an unprecedented level of spatial resolution.

 

And according to Frederic Gonte, the head of instrumentation at Paranal, its creation marks a "milestone in our quest for uncovering secrets of the Universe".

 

Speaking to BBC, he explained: "It's an extremely important step because now we know that we're ready to do real science. From now on, we'll be able to observe things we were not able to observe before."

 

Recalling the construction process, Mr Gonte observed that the team used an instrument called Pionier, which replaces a multitude of mirrors with a single optical microchip, to link the four telescopes together.

 

Jean-Philippe Berger, a French astronomer involved in the project, explained that this did not work at the first time of asking. This time, though, it was apparent that the instruments were working correctly, he said.

 

"Last time," Mr Berger observed, "the atmospheric conditions and vibrations in the system were so bad that the data was just worthless. We stopped after half an hour knowing that it wouldn't improve.

 

"So, this attempt is the real first one to carry out observations for several hours straight to test the system in different conditions."

 

It has been confirmed by the team in Chile that the impressive new system is to be made available to the entire astronomical community, meaning that anyone visiting the facility in the South American country will have access to the cutting-edge technology.

app economy.JPG

(via TechNet study)

 

Some say the mobile device "app gold rush" is over. Both the iOS and Android markets have the better part of a million applications each, so how can there be room for more? I disagree. I think the field is flush with possibilities. So far, 466,000 jobs have been created in the "app economy" business. There is room for more.

 

The App Economy generated $20 billion USD in 2011 alone, according to the TechNet study on the industry. The revenue includes app sales, in-app advertising gains, and virtual and physical goods sold through apps. The major contributors to the app markets are not surprising: iOS, Android, BlackBerry, Facebook site apps, and Windows Mobile/Phone. (I would say BlackBerry may be a dwindling market for the developer; beware.)

 

jobs by region.JPGapp jobs by location.JPG

(Left) App jobs per state  (Right) App jobs per city (via TechNet study)

 

Geography was also telling in the report. California takes the crown with 23.8% of the jobs. New York, Washington, Texas, New Jersey, Illinois, Massachusetts, Georgia, Virginia, and Florida round out the top 10, in order. Being close to the OS companies in Silicon Valley is a popular choice for app developers, while others want to be near the advertising and media concentration in New York.

 

Growth is predicted by the report to be significant in the coming years. Between 2010 and 2011, a 45% increase was seen in app-related job postings. If you have the skills, the jobs are plentiful.

 

growth app.JPG

App career growth chart (via TechNet study)

 

With feature phones (dumb-phones) outnumbering smartphones 4:1 globally as of 2011, the app market has the potential to grow 400%: take that $20 billion and make it $80 billion, to give another perspective. There were 82.2 million smartphone users in the USA in 2011, and those numbers will only grow over time. I liken this to the adoption of computers in the home: at first slow, and now every home has several.

 

Want to get started? Try the Google/MIT App Inventor. No coding needed.

 

Cabe

http://twitter.com/Cabe_e14

 

See the full TechNet study, attached to this post.

40327_web.jpg

Road power faux-schematic (via Stanford University)

 

MIT is again at the heart of another technological advancement. This time Stanford University is taking an MIT development to another level: providing power to electric vehicles via wireless power transfer coils embedded in roadways.

 

MIT created a wireless power transfer technology that can handle 3 kW of power across a few feet. Originally, it was for charging EVs while parked. Stanford associate professor Shanhui Fan wants to take the MIT tech to 10 kW at a distance of 6.5 feet. Fan explained his goal: "Our vision is that you’ll be able to drive onto any highway and charge your car. Large-scale deployment would involve revamping the entire highway system and could even have applications beyond transportation.”

 


 

Fan's system would place copper coils in the road surface that are tuned to resonate with another coil placed inside the moving EV. With the road coils spaced so closely together, there would always be a constant power connection to the road, no matter how fast one drives. Postdoctoral scholars Xiaofang Yu and Sunil Sandhu discovered that at a 90-degree angle, attached to a metal plate, a copper coil could transfer 10 kW at 6.5 feet. Proving the possibility is one hurdle cleared. Using magnetic resonance coupling, Fan estimates that an energy transfer efficiency of 97% would be needed to make it useful. Even for magnetic coupling, that efficiency requirement is a tall order. When it comes to technological advancements, always set the bar high.
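Magnetic resonance coupling works because both coils are LC circuits tuned to the same resonant frequency, f = 1/(2π√(LC)). As a rough illustration of the tuning step (component values below are made up, not Stanford's actual design), here is how you might check that a road coil and a vehicle coil resonate at the same frequency:

```python
import math

def resonant_frequency(inductance_h, capacitance_f):
    """Resonant frequency of an LC tank: f = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Hypothetical values: a 24 uH road coil with a 100 nF tuning capacitor.
road_f = resonant_frequency(24e-6, 100e-9)

# The vehicle coil has a different inductance, so it needs a different
# capacitor to hit the same resonance: C = 1 / ((2*pi*f)^2 * L).
vehicle_l = 12e-6
vehicle_c = 1.0 / ((2.0 * math.pi * road_f) ** 2 * vehicle_l)
vehicle_f = resonant_frequency(vehicle_l, vehicle_c)
```

The efficiency figures quoted in the article then come down to how strongly the two tuned coils couple at the driving distance, which this sketch does not model.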

 

The Korea Advanced Institute of Science and Technology (KAIST) already has Fan beat. Its road power system is already in operation on the school's campus. Although it achieves only an 80% transfer efficiency, it applies 30 kW at the source. Perhaps the Stanford team should take some cues from KAIST. There is always Japan's EV road rescue service as a backup.

 

Cabe

http://twitter.com/Cabe_e14

AI mit google.jpgai book.JPG

(Left) Software interface (via Google). (Right) App Inventor book by David Wolber (via Amazon)

 

Not being able to find an app that can perform a specific task can be frustrating. The only option is to develop it yourself. Google and the Massachusetts Institute of Technology (MIT) joined forces to help you get started. The two came up with the "App Inventor" for Android software toolset. The software lets your imagination go wild in the Android app world as quickly as possible.

 

If you have an idea for an app but are not very code savvy, don't sweat it. App Inventor was created precisely for users who have no prior knowledge of programming: easy-to-use app development software for everyone.

 

Do not start whipping out your wallets yet; App Inventor is free, according to MIT’s website. However, the team says it will one day accept contributions for the software. The MIT team explained the goal: "[We] hope to nurture a robust and active open-source project but for now we don't want to distract the MIT developers from their efforts to complete and deploy the large-scale public server. In the meantime, we'll update the code periodically to match what's running at the latest MIT experimental system."

 

Google decided to shut down its App Inventor service as part of a round of service closures, but the MIT team is still in full force. Download the initial free (open-source) release of App Inventor at the project's main page. Not much support is available at the moment, but expect a deluge of examples and help in the next few months.

 

App Inventor uses the Apache License 2.0, which does allow for the selling of apps created with the software. Other restrictions may apply. Despite all this, App Inventor looks like an easy way to get started in the app-creation world.

 

Cabe

http://twitter.com/Cabe_e14

hiriko.jpgkirako 2.JPG

Hiriko concept images (via MIT)

 

Double savings with this car! You can save money and save room. Hiriko ("of the city") is the name of the double-whammy car that not only folds itself but also runs on electricity. It was presented by Jose Manuel Barroso, president of the European Commission, in Brussels. Barroso is backing the project along with the Spanish government and the USA's MIT Media Lab. Their goal is to have the Hiriko on Spanish streets by 2014.

 

Being a folding car, do not expect much room. The rear wheels simply fold right under the chassis, compressing the rear section forward as the car folds vertically. This leaves the car with only two-thirds of the floor real estate of a Smart ForTwo; in other words, it is small when parked. There is only one door to get in and out of this two-seater. The last car that opened up in the front was not much of a success, so let us hope for the best with this one.

 

Hiriko’s power comes from four in-wheel motors. Each wheel is independently driven and steered by its own "robot" electric motor. The oddest design feature is that the system can tug at the driver's fingers via haptic feedback in the steering wheel. Aside from the haptic, traditionally shaped steering wheel, a joystick control will also be an option, which is undeniably a throwback to early-model automobiles.

 

Unfortunately, you cannot get your hands on one yet. Only 20 prototypes are rolling out to street testing in various European and American cities so far. However, in 2014, expect the price tag for the EV Hiriko to be in the $16,000 range.

 

Cabe

http://twitter.com/Cabe_e14

With less than 200 days to go until the start of the London 2012 Olympics, many of the businesses located in the UK's capital city are starting to turn their attention to the logistical challenges posed by hosting the world's largest sporting event.

 

The long-term concern for Londoners surrounding the Games has been how the city's transport infrastructure will cope with the stresses of moving millions of people around London over the course of two weeks.

 

However, a new, perhaps even more terrifying, potential problem has reared its head, with a government report suggesting that the country's telecoms system may be unable to cope with demand to access the internet in certain areas.

 


 

The Cabinet Office's official advice, detailed in its Preparing your Business for the Games report, implores UK firms to help ease demand by embracing flexible working, which would reduce stress on both the telecoms system and the transport network.

 

"It is possible that internet services may be slower during the Games or, in very severe cases, there may be dropouts due to an increased number of people accessing the internet," the report reads.

 

Internet service providers, meanwhile, have been warned that they may be forced to "introduce data caps during peak times to try to spread the loading and give a more equal service to their entire customer base".

 

This statement has, unsurprisingly, prompted fears that major businesses in the UK - many of which are headquartered just a few miles away from the Olympic stadium - may witness a significant slowdown in productivity.

 

In preparation for the Games, firms are being urged to conduct feasibility studies into how best to cope during the event. Organisers of the Games have, for their part, already warned that they expect as many as 800,000 spectators and 55,000 athletes, officials, organisers and press to travel to and from the venues every day.

 

And while this is the third time London has staged the modern Games, having done so before in 1908 and 1948, it is fast becoming apparent that advances in technology are creating new problems for organisers.

(via AT&T)

 

It is easy to forget that there was a time when data communication was a vastly unknown and abstract topic. A relic from the start of the digital era was found in the AT&T archives. It was recently released so that those who were not around to experience the paradigm shift could at least marvel at the primitive history of robotics in the 1960s.

 

 

Jim Henson, the filmmaker who would eventually work on Sesame Street and the Muppets, created a short film about a little industrial robot to reify the concept of data transfer and communication for business people attending Bell System's Bell Business Communication Seminar.

 

 

Ted Mills of AT&T sent Henson a memo at the time, describing the concept he wanted for the film. It read, "He [the robot] is sure that All Men Basically Want to Play Golf, and not run businesses — if he can do it better." Henson went a little deeper.

 

 

In the short film, titled "Robot," Henson communicates the immense potential of computerized systems in a slightly dark comedic tone, sure to intrigue anyone attending the seminar. The robot explains its affinity for “digesting vast oceans of information” as well as its contempt for emotional humans, which, in its view, serve little purpose for the new robotic race.

 

 

Apart from its technological hubris, the robot explains that its potential is limited by man's imperfect design. I wonder if the message resonated among the ambitious business folk of the day. It was only the beginning of our industrial exploitation of the digital age. Luckily, it did not turn out to be a robot-controlled dystopian future.

 

Element14 user Jim Hayden suggested another Jim Henson and AT&T gem. I find it interesting that the AT&T hierarchy of the time felt that puppets were the only way to get CEOs accustomed to computerized technology. I suppose it worked. See below:


 

 

 

Cabe

http://twitter.com/Cabe_e14

gun2.jpg

Xappr (via MetalCompass)

 

A new attachment to enhance your smartphone gaming is on the way from the company MetalCompass. The Xappr is a gun-shaped attachment that communicates with your phone, which is held in place by flexible clamps. The Xappr interacts with your phone to give users an enhanced gaming experience. If the user pivots, the game screen tilts along with them, creating a more realistic feel.

 

 

This expands gaming capabilities for smartphones and improves on the much-lacking shooter games currently available. Augmented reality games can use the camera on your phone to create fully interactive environments. One such possibility would be a fully functional game of laser tag with friends. The Xappr is due out around June and will cost a paltry $30 USD.

 

 

There will be two options when it is released in spring 2012: the Xappr and the Micro-Xappr. Both will be compatible with iOS, Android, and Windows Phone devices. The flagship release title will be an augmented reality deathmatch game called "ATK."

 

 

When the Xappr is released, its first use for me will be a teardown and analysis. What is under the hood is still wrapped in secrecy.

 


 

 

Cabe

http://twitter.com/Cabe_e14

 

Practicing a musical instrument takes a lot of time, preceded by the loss of a sizable pile of money. To parents, instruments can be a double-edged sword: expensive and loud, but also a considerable skill and hobby for a child who is truly interested. That is, unless the parent is a tech-savvy engineer, in which case the perfect solution has already been built for kids who want to practice the drums.

 

 

Ian Cole was able to make an electronic drum set using the all-inclusive "Drum Kit - Kit AI" (DKKAI) from SpikenzieLabs and some miscellaneous hardware from IKEA and the corner store. The SpikenzieLabs drum machine is an ATmega168-based kit that includes piezoelectric sensors, which can be placed on any makeshift drumhead. Using the SpikenzieLabs DKKAI Roadie, programming the sensors becomes even easier. The Roadie is a daughterboard add-on that allows the user to assign a MIDI output sound to each sensor and store it in the ATmega's EEPROM directly from the connected MIDI device.
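For context on what a kit like this emits: a MIDI Note On event is just three bytes — a status byte (0x90 plus the channel number), the note number, and the velocity. A quick sketch of building one (Python, illustrative only; the DKKAI's firmware is its own, and the note/velocity mapping below is an assumption based on the General MIDI percussion map):

```python
def midi_note_on(channel, note, velocity):
    """Build a 3-byte MIDI Note On message.

    Status byte: 0x90 | channel (channels 0-15); note and velocity
    are 7-bit values (0-127), so they are masked to be safe.
    """
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

# General MIDI percussion lives on channel 10 (zero-based index 9);
# note 38 is an acoustic snare. A hard piezo hit might map to a high
# velocity such as 112.
msg = midi_note_on(9, 38, 112)
```

A hit on a piezo pad ultimately becomes one of these three-byte messages, which is why any MIDI-capable synth or app can play the sounds.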

 

 

Cole opted to use Tupperware-style containers from IKEA as the drums. He attached the piezos to aluminum plates, which were then placed under the containers' lids. A PVC piping structure holds the drums in place and doubles as electrical conduit for the wiring.

 

 

His son is now able to play quality electronic drums using a MIDI-capable iPad and the GarageBand software connected to an amp or headphones. Let's hear it for tech-savvy parents.

 

 

Everything you need to know about the SpikenzieLabs drum kit can be found after the link.

 

Cabe

http://twitter.com/Cabe_e14

A new report from IHS iSuppli has suggested that the semiconductor industry will expand at a slower pace in 2012, largely due to the ongoing economic downturn and sluggish consumer demand. The research body has said that annual revenue for the industry will hit $323 billion, a growth of 3.3 percent year-on-year.

 

Len Jelinek, director and chief analyst of semiconductor manufacturing research at IHS iSuppli, explained that the semiconductor industry is struggling to grow during what has become one of the longest economic downturns in history.

 

Because consumers in the US, Europe, Japan and China have less disposable income, sales of popular electronic products, such as PCs, laptops and MP3 players, are relatively flat.

 

Indeed, he observed that underwhelming demand for such products has had a residual impact on DRAM, with iSuppli saying that demand for the memory will fall 16.1 percent this year. This is, however, better than the fall seen in 2011, when demand dropped 26.8 percent, iSuppli said.

 

"There is one huge wild card in DRAM right now and that is industry consolidation," commented Mike Howard, senior principal analyst at iSuppli. He explained that while supply for DRAM will be up around 40 percent this year on a gigabit basis, this rise will take place in a very soft pricing environment.

 

Demand for NAND, meanwhile, is being driven by the rise of smartphones and tablet devices. iSuppli said that although demand for NAND flash is rising, additional factory capacity built to meet it is likely to lead to oversupply. It should also be noted that NAND flash prices have been falling since 2010.

 

"Despite robust demand coming from the mobile segment, we are forecasting NAND revenue growth of only five percent in 2012 to reflect the risk of oversupply currently plaguing the industry," Dee Nguyen, analyst at iSuppli, commented.

 

Overall, however, iSuppli appears to be relatively upbeat about the state of the semiconductor industry, explaining that while revenue growth is likely to be negative in the first half of 2012, it expects to see the shoots of a recovery thereafter.

Via-SolarDecathlon.gov_.jpg

Solar Decathlon grounds (via DOE)

 

An innovation-stirring biennial competition is travelling to the West Coast in 2013 for the first time in its decade-long existence. The Solar Decathlon, normally held in Washington, DC, will be in California this year for the 20-plus college teams participating from all parts of the globe. Competition organizers hope to engage a new audience with innovations in technology and design as the teams compete to make the best solar-powered home.

 

The Decathlon pits teams of students against each other in 10 categories. Using solar energy to power the home is only part of the challenge. The houses must be functional and cost-effective, as well as incorporate a modern design. Solar technology progresses yearly; increases in efficiency, innovative applications, and the use of organic materials will make the biennial competition exciting.

 

OLED-roof2.jpg

From transparent to light source: the OLED solar cell (via Philips)


One example of a possible contender in the event comes in the form of solar cells and light sources, combined. Lumiblade organic LEDs (OLEDs) are an emerging technology being developed by Philips and the chemical company BASF. These light sources produce light by running a current through a thin layer of organic semiconductor material. The collaboration between Philips and BASF has produced Lumiblade OLEDs just 1.8 mm thick, with materials and dyes that become transparent when light is not being emitted. Furthermore, the OLED can be combined with solar cells to capture solar energy. Transparent panels that capture solar energy and emit light could be used throughout the modern solar home.

 

Dr. Felix Görth, head of OLED and photovoltaics at BASF, described the tech best: "This combination allows the driver to enjoy a unique open-space feeling while it generates electricity during the day and pleasantly suffuses the interior with the warm light of the transparent, highly efficient OLEDs at night."

 

OLED-roof.jpg

Solar OLED (via Philips)

 

Innovations similar to the Lumiblade will surely be showcased in the 2013 Solar Decathlon. The Department of Energy’s Secretary, Steven Chu, explained what we can expect, “The Solar Decathlon will unleash the ingenuity, creativity and drive from these talented students to demonstrate new ideas for how families and businesses can reduce energy use and save money with clean energy products and efficient building design.”

 

Cabe

http://twitter.com/Cabe_e14

graphene-3D-wavey.jpg

Graphene sheet concept art from James Hedberg

 

Why has graphene not overtaken silicon for use in electronics?

 

 

Graphene is a single layer of carbon atoms, only one atom thick, with extraordinary characteristics. It is stronger than diamond, conducts electricity better than copper, and is impenetrable to gases and liquids. Its low resistance could yield new and better transistors and circuits, and its exceptional conductivity allows electrons to flow faster than in today's silicon transistors.

 

 

However, with the incredible speed comes another problem. For transistors to work, they must have distinct on and off states. Creating a transistor with a consistent off state is difficult due to graphene's high conductivity: even in sheets one atom thick, electrons often leak through in the off state. The band gap cannot be made large enough to be effective.

 

 

Konstantin Novoselov is leading a group of researchers working to create an efficient graphene-based transistor. His work on graphene won him, with colleague Andre Geim, the 2010 Nobel Prize in Physics. Currently, the group is developing a transistor by placing a layer of molybdenum in between two sheets of graphene. The molybdenum layer is an excellent insulator and stops electrons from passing through while the transistor is in the off state. Further research and experimentation are still needed, but successfully creating a graphene transistor could significantly expand our capabilities in hardware engineering.

 

 

Take the 155 GHz graphene transistor as an example of the possibilities.

 

 

Cabe

http://twitter.com/Cabe_e14

 

Imagine being able to fit a tablet into your pocket without having to shrink the display. It may soon be possible thanks to researcher Juergen Steimle. Working with faculty at MIT's Media Lab, he has developed multiple tablets that work a bit differently than their traditional counterparts.

 

 

The technology, dubbed FoldMe, works by using overhead infrared cameras to track the movement and position of the tablet surface. The software interface is projected onto the surfaces using two full-high-definition projectors. Hinges within the tablet allow the display to convert from a flat panel, to a two-panel display held like an open book, or, when folded completely over, to a smaller display.

 

 

Hand gestures are read using infrared markers on the fingernails to give it the touchscreen feel most people are used to. The hinges also create new controls that can be used within applications: since the cameras read the angle of the fold, that angle can control information that an on-screen dial normally would.

 

 

It appears that this may not work well outside, or off the tablet projection grid. However, it may usher in a new level of connectivity for the boardroom. Later this month, Steimle will present his work at the TEI conference in Canada.

 

 

Cabe

http://twitter.com/Cabe_e14

The European Commission has announced that it is looking in to Samsung's patent deals, with the regulator harbouring fears that the firm used some of its intellectual property rights to "distort competition in European mobile device markets".

According to the Commission, it is determined to establish whether Samsung met its agreement to license key technologies to rivals.

News of the investigation comes at an awkward time for Samsung, which is currently embroiled in patent battles with Apple in various courtrooms throughout the world.

Back in 1998, Samsung made an irrevocable commitment to the European Telecommunications Standards Institute to respect FRAND terms, which amount to a promise by industry players to license innovations that are critical to an industry standard on "fair, reasonable and non-discriminatory terms".

Under the terms of the FRAND commitments, the owner of the patent cannot discriminate over who gets to use its invention. The terms of the agreement also state that the fee for the patent cannot be excessive.

"In 2011," the regulator explained, "Samsung sought injunctive relief in various member states against competing mobile device makers based on alleged infringements of certain of its patent rights which it has declared essential to implement European telephony standards."

Over the last few months, Samsung has made more than a dozen patent claims against Apple in Germany, the Netherlands, France and Italy, all of which relate to 3G-essential technologies.

In all of the cases thus far, Samsung has been defeated, largely because it has been judged to have failed to meet the commitments it pledged to in 1998. But according to patent consultant Florian Mueller, the European Commission "can't wait until Samsung finally wins a ruling based on such a patent and enforces it, potentially causing irreparable harm".

The European Commission instigated the proceedings, according to a spokesman, who confirmed that despite speculation to the contrary, it had not received an official complaint from Apple or any other company about the issue.

Vicki Salmon, a member of the UK's Chartered Institute of Patent Attorneys, explained that the official inquiry is likely to complicate matters further for Samsung. "It is really difficult for Samsung to have the commission wading in when none of its competitors have made a complaint," she said.

It is good practice to avoid using one resistor to limit the current through more than one parallel LED. Sharing a resistor among LEDs puts the same voltage across the parallel LEDs. If the V-I curves of the diodes differ, different amounts of current will flow. There is commonly a good deal of variation in the forward voltage of diodes, even of the same part number, so we use separate resistors.
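To see how badly parallel LEDs share current, model each with the Shockley diode equation, I = Is·exp(V/(n·VT)). With the same voltage forced across both (one shared resistor), a forward-voltage mismatch of only 100 mV produces a large current imbalance. The numbers below are illustrative assumptions, not measurements:

```python
import math

# Shockley model: I = Is * exp(V / (n * VT)); an ideality factor n = 2
# is a rough assumption for LEDs, and VT ~ 25 mV at room temperature.
N_VT = 2 * 0.025  # n * thermal voltage, in volts

def led_current(v, i_sat):
    """Diode current at forward voltage v for saturation current i_sat."""
    return i_sat * math.exp(v / N_VT)

# Two "identical" LEDs whose V-I curves are shifted by 100 mV: model the
# weaker one with a saturation current scaled so it needs 0.1 V more for
# the same current.
i_sat_a = 1e-20
i_sat_b = i_sat_a * math.exp(-0.1 / N_VT)

v_shared = 2.0  # same voltage across both diodes (shared resistor)
ratio = led_current(v_shared, i_sat_a) / led_current(v_shared, i_sat_b)
# ratio = exp(0.1 / 0.05) ~ 7.4: LED A hogs about 7x the current of LED B.
```

The exact ratio depends on the ideality factor assumed, but the exponential V-I curve is why even small V[F] mismatches matter so much.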

 

Suppose we want to set two levels of brightness for a pair of LEDs. We know we need a separate resistor for each LED, but can we share transistors as in the diagram below?

SharedFETm.jpg

At first glance this looks okay because each LED has its own resistor. I was working on a circuit like this last week, and when I measured the resistances in the circuit, I found them to be lower than expected.

 

To see why this is, consider the case when D2 is removed. R1 still connects D1 to ground, but there is another path from D1 to ground: through R2, R3, and R4. So we actually have R1 || (R2+R3+R4) = 200 || (100+100+200) = 133.3 Ω. This becomes clearer if you redraw the circuit (without changing the netlist) this way:

SharedFET_Redrawm.jpg
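The unintended parallel path is quick to check numerically. Here is a small sketch (plain Python, using the resistor values from the example above) that computes the effective resistance from D1's node to ground with D2 removed:

```python
def parallel(*resistors):
    """Equivalent resistance of resistors in parallel (all values in ohms)."""
    return 1.0 / sum(1.0 / r for r in resistors)

# R1 in parallel with the sneak path R2 + R3 + R4
r1 = 200.0
sneak_path = 100.0 + 100.0 + 200.0  # R2 + R3 + R4
r_effective = parallel(r1, sneak_path)
print(round(r_effective, 1))  # 133.3 -- not the 200 ohms R1 alone would give
```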

 

What happens if D2 isn’t missing but simply has a significantly different forward voltage?  How do we analyze this circuit?  We can draw the circuit with Q2 turned off and therefore omitted:
SharedFET_Delta.jpg
We can use the delta-wye transform to generate an equivalent "wye". This is a transform I have not used since Circuits I; I'm thrilled to finally have a practical use for it!
SharedWye.jpg

(Note: I used alphabetic reference designators like R[a] in the wye topology and numeric designators like R[1] for the delta. This is the opposite of how my textbook did it, but I did it this way to avoid confusion with the numeric designators in the original circuit.)

 

For the case where D2 is removed, R[c]+R[a] = 133.3 Ω, the same as we obtained for this case without the delta-wye transform. In the wye topology, however, we can now work out the current variations at the maximum and minimum forward voltages of the diodes.
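The transform itself is mechanical enough to script. Below is a sketch of the standard delta-to-wye formulas in Python. The leg assignment (R1 = 200 Ω from D1's node to ground, R4 = 200 Ω from D2's node to ground, and R2 + R3 = 200 Ω between the two diode nodes) is my reading of the example circuit, so treat that mapping as an assumption:

```python
def delta_to_wye(r_ab, r_bc, r_ca):
    """Standard delta-to-wye transform.

    r_ab, r_bc, r_ca are the delta legs between terminal pairs;
    returns (r_a, r_b, r_c), the wye resistors attached to a, b, c.
    """
    total = r_ab + r_bc + r_ca
    r_a = r_ab * r_ca / total
    r_b = r_ab * r_bc / total
    r_c = r_bc * r_ca / total
    return r_a, r_b, r_c

# Assumed mapping: a = D1's node, b = D2's node, c = ground
r_a, r_b, r_c = delta_to_wye(r_ab=100.0 + 100.0,  # R2 + R3
                             r_bc=200.0,          # R4
                             r_ca=200.0)          # R1
print(round(r_a + r_c, 1))  # 133.3 -- agrees with the D2-removed case above
```

As a sanity check, the resistance from a to c with b open must be the same in both topologies: in the delta it is r_ca || (r_ab + r_bc) = 200 || 400 = 133.3 Ω, and in the wye it is r_a + r_c, which the code confirms.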

 

Conclusion:

When I look at the equivalent wye circuit, I see that some of the resistance is separated and some is common to both LEDs. My first thought was that this is about halfway between the ideal of each LED having its own resistor and the undesirable practice of two LEDs sharing a single resistor. I showed this to my colleague Bryan Piernot, however, and he showed me that even with hundreds of millivolts of variation in V[F] among the diodes, the disparity due to "wye" resistor sharing is minor compared to the effect of V[F] variation with the current paths kept completely separate. The only significant effect of transistor sharing appears if one of the LEDs is removed: the effective current-limiting resistance decreases from 200 to 133 Ω in our example.

 

N-channel FETs cost $0.15 at 1k quantities and take up 2 mm^2 of board space apiece. If neither LED will ever be removed from the circuit, it is fine to share a transistor between two LEDs. If, however, an LED may be removed or switched off, engineers must be aware that removing that LED will affect the brightness of the other one.

headcans sensor.jpg

Universal Earphones (via Igarashi Design Interfaces Project team)

 

Another instance of "it is so simple it eluded me."

 

Designers from the Igarashi Design Interfaces Project, of the Japan Science and Technology Agency, have made a set of ear-bud-style headphones that know which ear they are in. A proximity sensor built into each bud detects where the parts of the ear are located: the right ear registers on one side of the sensor, the left ear on the other. According to the picture, it appears the sensor detects one ear directly and assumes "left" when it senses nothing at all. This detection allows the system to deliver the stereo audio channels accordingly.

 

The headphones also pass a "weak electrical current" through the user's head. This is used to detect when an ear bud is removed and shared with another person. Without the current signal, the system immediately plays the audio in mono, so that when music is shared, the fullest sound is delivered and both listeners hear all parts of the audio.
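The fallback behavior is simple to picture in code. This is a hypothetical sketch (not the team's implementation): while the body-current loop confirms both buds are on one listener, stereo passes through; when the loop breaks, both buds receive the same mono downmix:

```python
def route_audio(left, right, current_loop_closed):
    """Return the (bud_a, bud_b) sample streams to play.

    left/right: lists of audio samples for each stereo channel.
    current_loop_closed: True while the weak current through the
    listener's head is detected, i.e. one person wears both buds.
    """
    if current_loop_closed:
        return left, right                 # one listener: normal stereo
    # Buds are split between two people: average channels into mono
    mono = [(l + r) / 2.0 for l, r in zip(left, right)]
    return mono, mono

# Shared buds collapse a hard-panned signal into mono for both wearers:
print(route_audio([1.0, 0.0], [0.0, 1.0], False))  # ([0.5, 0.5], [0.5, 0.5])
```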

 

An extension of the project aims to detect, via skin-conduction sensing, when the buds are actually in the user's ears, so the audio stream can be started or stopped automatically. The entire project will be showcased at the Intelligent User Interface Conference in Lisbon, Portugal, February 14-17. More details as they come in during the show.

 

Cabe

http://twitter.com/Cabe_e14

An overview of the Wind for Schools program (via DOE)

 

An exciting new project in Illinois is looking for middle schools and high schools to take part in an innovative curriculum change. The project is called Illinois Wind for Schools, modeled after the National Renewable Energy Laboratory's (NREL) "Wind for Schools." Illinois has the second largest wind power capacity in the United States, but it has not received funds from the Department of Energy to participate in NREL's program.


 

Instead, the Illinois Institute for Rural Affairs and the Department of Engineering Technology at Western Illinois University, along with the Center for Renewable Energy and the College of Education at Illinois State University, are organizing their own program with funding from the Illinois Department of Commerce and Economic Opportunity. The goal is to give students a well-rounded idea of how weather and energy systems interact, pique their interest in the wind energy field, and set the stage for Illinois-based wind energy projects.


 

Applications are being accepted from schools that would like to participate in the program, which will begin in the 2012-2013 school year. Three to five schools will be chosen. These schools will receive all the equipment and models necessary to teach the theory of wind energy and give the students plenty of hands-on time with the projects. Functional model turbine components, model wind tunnels, testing equipment, weather balloons and weather-data collection will be implemented in customizable labs and a comprehensive curriculum at each participating school. The ILWFS program will also run training sessions for teachers.


 

The project is getting a hand from the NREL as a Wind for Schools Affiliate. Affiliates have access to the NREL's publications, previous experience, technical assistance, training programs, informational summits and the Wind for Schools online database.


 

There is no talk yet of expanding the program to more schools, but we are sure to learn more once the program has run through some iterations. The chosen schools will be notified April 2. Undoubtedly, this is a necessity for the future, and more schools should follow. Webinars, training classes, and other useful wind energy information are available at the Wind Powering America page.


 

Cabe

http://twitter.com/Cabe_e14

junecam.jpg

Pigeons fitted with Neubronner's various camera system (via archive photography)

 

The world is infatuated with flying robots carrying cameras. Take the latest toy helicopters: camera connectivity is an essential selling point. Companies announce their technological breakthroughs, unaware that the technology is already 104 years old (as of 2012). Pigeons outfitted with cameras took the world by storm in 1908, the product of one person: Julius Neubronner.

Julius Neubronner was a German apothecary at the start of the 1900s. His family came from a long line of early medical professionals, dealing with all things medicine, from chemical preparations to surgery. Neubronner took over his father's practice in 1886. During the early days of the new pharmacy (1902), Neubronner expanded its capabilities by taking up "pigeon post" for delivering and receiving urgent medications. A pigeon's maximum carrying weight was 75 grams (~2.6 oz).

 

Julius_Neubronner_with_pigeon_and_camera_1914_cropped.jpg

Julius Neubronner 1914 (via archive photography)

 

Pigeon post was used in high volume during the 19th and early 20th centuries for private and military correspondence. During the Franco-Prussian War of 1870, over 50,000 microfilm telegrams were delivered to the besieged capital in what became known as the "pigeon post of Paris." In that era, the pigeon was a tried-and-true vehicle: an autonomous flying device capable of long-distance travel, hazard avoidance, and reuse (not to mention easily reproducible).


In 1903, some of Julius Neubronner's pigeons were lost in heavy fog. Eventually they found their way home, as healthy, and fat, as ever. This inspired Neubronner to attach a camera to a pigeon and record where it had been, tracing its path to its destinations. At the time, Neubronner was an amateur photographer and filmmaker, so it was no long shot that he would attempt the feat. (Side note: the lost pigeons had been in the custody of a restaurant chef in Wiesbaden, hence their healthy condition upon return.)

After experimenting with a Ticka watch camera, a small film camera of the time, Neubronner set out to create a lightweight system for pigeons to carry. He developed a wooden camera model weighing between 30 and 75 grams that attached to the pigeon via a harness and an aluminum cuirass (chest plate). The camera worked on a time delay under pneumatic control. He found the pigeons would return home as fast as possible to have the camera removed, the same incentive behind carrier pigeon delivery. It was a success. (Neubronner built his dovecote, or pigeon house, with an elastic landing board and a spacious entry to accommodate the burdened pigeons. He was good to the birds.)


In 1907, he applied for a patent at the German patent office, only to have the application rejected as describing something "impossible." In 1908, he produced photographs taken with the pigeon cameras, and he was granted the patent: "Method of and Means for Taking Photographs of Landscapes from Above" was awarded in December of 1908.


Word spread after the 1909 International Aviation Exhibition in Frankfurt, where spectators could watch the pigeons returning; their photographs were then turned into postcards for the audience. Neubronner also won prizes at the 1910 and 1911 Paris Air Shows. The final camera system weighed 40 grams and could take 12 exposures.

The most famous photograph was one where the pigeon's wings are seen on either side of the image. See upper left of the image below:

 

Pigeon_photographers_and_aerial_photographs.jpg

Aerial photographs of Schlosshotel Kronberg (top left) and Frankfurt (bottom left and center); pigeons fitted with cameras (right). (via Wiki)

 

Neubronner released a book describing 5 different models of camera on the pigeon platform:

- A double camera with lenses pointing in opposite directions.

- Stereoscopic setup with two lenses pointing in the same direction.

- One model that could transport film and take several pictures in a row.

- A bellows camera that would take a picture and retract the bellows.

- A panoramic camera based on the Doppel-sport panoramic camera. A lens would rotate 180 degrees to take a large exposure. This was never made.

 

Bundesarchiv_Bild_183-R01996,_Brieftaube_mit_Fotokamera_cropped.jpg

Pigeon fitted with a German camera circa WWI or WWII

 

Pigeon camera systems were tested for use in the First World War; Neubronner had military use in mind when he originally designed them. Tests conducted by the Prussian War Ministry gave satisfactory results, but pigeons were never put into use for surveillance. Neubronner did build a mobile dovecote and darkroom for battlefield use, yet even after training pigeons to work from it, the system was never deployed.

The German army did take the pigeon camera system into the field during World War II, with a difference: they trained dogs to carry sets of pigeons to locations for release and recovery. Each pigeon camera was capable of 200 exposures per flight, and the goal was to release the birds behind enemy lines. Whether these were actually used is left to speculation; however, a German toy soldier was produced depicting a soldier releasing a camera pigeon, and in 1942 the Russian army found a truck containing pigeon cameras that took pictures at five-minute intervals.

 

Brieftaubengruppe.jpgNeubronner_mobile_dovecote_and_darkroom.jpg

(Left) German toy soldier with pigeon releasing. (Right) Neubronner's mobile dovecote

 

Despite the rise to fame and possible military use, the pigeon camera was not a profitable endeavor for Neubronner. He continued his pharmacy practice, which stayed in operation for two more generations. Neubronner's youngest son, Carl Neubronner, managed the company for 70 years before selling it in 1995. Later, Carl Neubronner founded the Carl and Erika Neubronner Foundation to help disabled or needy people and to promote cultural non-profit organizations in Kronberg.

 

451px-Kronberger-burg-museum010.jpg

Neubronner pigeon exhibit (via Stadtmuseum Kronberg)

 

The next time you see a camera system on a flying toy or UAV, remember: it all started with Julius Neubronner's pigeon camera.


Cabe

http://twitter.com/Cabe_e14

In light of recent criticism from US members of Congress, search engine giant Google has pledged that its new privacy policy will still give users control over data sharing. In a letter to the California-based technology giant, the members of Congress expressed concern that users would not be able to opt out of the new data-sharing system when using Google products.
The Congressmen observed that consumers should have the option of opting out of data collection when "they are not comfortable with a company's terms of service and that the ability to exercise that choice should be simple and straightforward".
Google, meanwhile, has already stated its determination to make privacy across its products easier and clearer when introducing its new policy.
Writing for the firm's official blog, Google explained that the new privacy policy explains that, if you're signed in, "we may combine information you've provided from one service with information from other services. In short, we'll treat you as a single user across all our products, which will mean a simpler, more intuitive Google experience."
It had been feared that Google would simply use the data to target advertising and search results at users. Indeed, the Congressmen expressed fears that some Google products and services are less visible, meaning that Internet users might be unaware of what data was being linked to them.
One of the signatories to the letter, Congressman Ed Markey, expressed particularly strong fears over how the new policy would impact on young people, pointing out that search through Google is like breathing for "millions of kids and teens".
However, he praised the new policy, saying that it should enable consumers to opt-out if they don't want their use of YouTube to "morph into YouTrack".
Google, for its part, pointed out that it is not necessary to log in to use a lot of its products, including its search engine. And when users are logged in, Google said that they can, if they so choose, take advantage of the privacy control options.


kinect for windows.JPG

Kinect for Windows v1.0 (via Microsoft)

 

The Microsoft Kinect is a widely popular motion-tracking camera system first released for the Xbox 360. It allows users to interact with games and menus without operating a controller. After its release, the "hacker" community quickly adapted the tech for use with computers, and simple tracking programs and art-based projects soon followed. Because of this popularity, Microsoft started the Kinect Accelerator program, offering large cash prizes for further development with the Kinect.

Fast forward to today: Microsoft has released Kinect for Windows version 1.0. Included in the release are an SDK and runtime environment. Most notable is a "near mode" for the new Kinect hardware, allowing clear tracking at 40 cm (about 16 inches). Having one on a desk is completely feasible now. (With the Xbox setup, users have to stand several meters away for accurate tracking.)


Improved tracking, support for up to four Kinect sensors, improved speech recognition, and a driver update system are included in the v1.0 hardware and software release. The SDK is aimed at companies looking to develop software or other products for the Kinect, but it is available for anyone to download at the moment. Applications written in C++, C#, or Visual Basic with Microsoft Visual Studio 2010 are supported right out of the box.


The Kinect for Windows hardware costs $250 and is available now; an educational price of $150 will follow later this year. I sense a mad rush to be first to market with various Kinect-based control interfaces. Hop on the dev train now, or read more about the release at the Kinect for Windows page.


Cabe

http://twitter.com/Cabe_e14

 

 

How will we look using Kinect for Windows? Find out in this week's Engineering On Friday.

 

 

More Kinect based projects:

Holodesk, the virtual 3D desktop

Control your robot with Kinect

Robotic shopping cart, follows its user around

Control the web and Windows 7 with Kinect

Kinect and Surface, virtual physics engine

Advanced robot tech for the masses

Upgraded humanoid service robot, buy it now

Autonomous robot plays catch

Surgeons of the future might use robotic nurses

 

Three-dimensional printing has only been around since circa 2003. Almost anything can be printed, though at reduced resolution, and home adoption is on the rise as 3D printers become cheaper. One designer created the first pseudo-record without even owning a 3D printer. The process created grooved tracks containing a song, a first step toward LP reproduction.

This 3D-printed record uses the 22 notes of the Fisher-Price Record Player's music box rather than a conventional vinyl needle. The model and the song were generated using the Processing open-source programming language (with the Beads library). The designer, "Pittance," modeled the original discs and figured out the timing necessary to trigger the player's 22 notes correctly.
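To get a feel for the timing problem, here is a rough sketch of the core mapping (this is not Pittance's Processing code, and the rotation speed and radii are placeholder assumptions): each note becomes a bump at a radius chosen by which of the 22 comb tines should pluck it, and at an angle set by when it should sound.

```python
import math

RPM = 22.0                      # assumed turntable speed; the toy's may differ
NUM_TINES = 22                  # the music box comb has 22 notes
INNER_R, OUTER_R = 20.0, 55.0   # assumed innermost/outermost track radii, mm

def bump_position(note_index, note_time_s):
    """(x, y) in mm of a bump that tine note_index plucks at note_time_s.

    Each tine gets its own radial track; the angle is how far the disc
    has rotated by the time the note should play.
    """
    radius = INNER_R + (OUTER_R - INNER_R) * note_index / (NUM_TINES - 1)
    angle = 2.0 * math.pi * (note_time_s * RPM / 60.0)  # radians travelled
    return radius * math.cos(angle), radius * math.sin(angle)
```

For example, a note on the innermost tine that should sound a quarter-revolution after playback starts lands at 90 degrees around the innermost track.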


After working out the notes needed to inscribe Jonathan Coulton's "Still Alive" onto the disc, the code was written. The pre-rendered disc STL file (generated via the Unlekker library) was sent to the online 3D printing company Shapeways. The returned disc is the one playing in the above video.


The fine grooves needed to match the quality of a regular stamped record compromise the strength of the plastic, making a 3D-printed disc very brittle. Perhaps soon we will be able to create record-quality 3D-printed copies of old or broken vinyl.


Soon, 3D-printed models will come under the scrutiny of I.P. law. Under the rules of SOPA, PIPA, or ACTA, Pittance could be arrested for creating a contraband Fisher-Price record. Print what you can, while you can.

 

Cabe

http://twitter.com/Cabe_e14

Wolf_Katrin_Aug_2011.JPG

Katrin Wolf, lead designer behind Pinch-pad (via Technical University of Berlin)

 

 

Sales of smartphones and tablet computers have grown exponentially in the past couple of years, accelerating how we transfer information to one another. For many users, the touch screen on these devices makes skimming through the internet or traversing a music library feel natural. Very soon, these smartphones and tablets may take advantage of a skill many of us never even knew we had.


Proprioception is the human ability to sense where our body parts are located even when we cannot see them. Using this natural skill and a couple of iPads placed back to back, researchers at Germany's Technical University of Berlin, led by Katrin Wolf, have created what they call the pinch-pad. The device reads the movements of the fingers and thumbs holding it, which can then serve as additional controls. For example, a circular motion of the thumb over the index finger could control display size or volume, while a sweeping gesture toward the pinky finger might trigger actions in a game. Exactly how the idea will be used is not yet known; however, more natural controls and quicker accessibility are on their way.


The technology will be presented at the TEI conference in Canada in about a month. Although there have been a few similar ideas, notably the PS Vita, due in late February, with its rear-facing touchpad, none have taken advantage of proprioception. The natural movements and easier accessibility will soon add to the speed of our information transfer and data processing.


Cabe

http://twitter.com/Cabe_e14


Three of the world's best-known email-service providers - Microsoft, Google and AOL - have backed plans designed to dramatically reduce phishing emails. Phishing is a way of attempting to acquire information such as usernames, passwords, and credit card details by masquerading as a trustworthy entity. The overarching ambition of the new working group formed by a number of leading companies is to stop the flow of phishing emails, which deceive recipients into believing they come from a credible source.
The firms, which also enjoy support from the likes of Bank of America and PayPal, hope to create a more secure environment, where computer users feel secure in the knowledge that none of their mail is a trick.
As a result, they have formed DMARC.org, a group of 15 companies that strive to promote a standard set of technologies, which they say will lead to more secure email.
PayPal, which has used the authentication technologies with Yahoo's email service since 2007 and Google's since 2008, is currently blocking around 200,000 fake emails per day. Google, meanwhile, is currently protecting 15 percent of the messages the company delivers to inboxes, according to Adam Dawes, a product manager at the search engine giant.
Michael Osterman, president of Osterman Research, which tracks the messaging industry, explained that the phishing problem is one the industry has been trying to resolve for years. Now, though, he said that there is a real chance that this ambition will finally be realised. "If you are a big bank or a retailer, you have a very strong interest in making sure people trust your messages," he told the Wall Street Journal.
However, Brett McDowell, chair of DMARC and a senior manager at PayPal, acknowledged that even if email can be authenticated, it won't bring about the end of email fraud. But it will mean that fraudsters will be forced to find new addresses before they are able to send more emails, he said.
It will not cost businesses an obscene amount of money to start using the standards, according to Mr. McDowell, though he explained that it will mean that they need to identify every server that sends email and also check that the technologies are in use.
With the working group having just launched, Mr. McDowell said that he hopes to see makers of security and email software adopt the DMARC standard.

