

Over 10,000 GB can be stored in this tiny pink droplet! Is DNA storage a possibility? UW and Microsoft partnered to create a method of accurately storing hard drive data in DNA snippets and recovering it. Their latest trial recovered the data perfectly, thanks to their new approach to encoding and decoding. (via University of Washington)


Wetware on the way?


Microsoft Research has set out to change the market for archival data storage by utilizing DNA to store millions of gigabytes of data in a single gram. To achieve this feat, which we recently posted about, they teamed up with researchers at the University of Washington; the group shared its findings in a paper presented at the ACM International Conference on Architectural Support for Programming Languages and Operating Systems.


Their paper elaborates on how Microsoft Research, in collaboration with University of Washington researchers, has been able to successfully store and retrieve data encoded in synthetic DNA. So far, this group is one of only two teams to encode and retrieve data stored in DNA with a one hundred percent success rate.


So, what’s the secret? It seems to lie in the encoding and decoding process. The process used to create and read the DNA is fairly simple. First, they encode a chunk of data into the letters A, C, G, and T: the nucleotides that are the building blocks of DNA. They then outsource the creation of snippets of DNA strands that carry their encoded sequence of letters.


To retrieve the data, they must sequence the DNA strands, which sit all together in the same test tube (seen above as a tiny speck of pink). Of course, retrieval is more involved than simply reading out the sequence of the DNA within the test tube: you have to decode it. And here is where this interdisciplinary team from Microsoft and the University of Washington got it very right!


They put the magic into how they chose to encode the data from its original bits of zeros and ones into the nucleotides A, C, G, and T. They knew that if they could streamline the process, they would have little to no errors later in the decoding stage. Essentially, they made it as simple as possible to avoid the errors that come with complexity. But how could they know where each snippet of DNA fell in the full sequence of the data? They encoded the equivalents of zip codes and street addresses into each snippet of DNA, so each sequence could be correctly placed within the bigger sequence for accurate decoding. A pretty clever and simple solution, right?
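As a rough illustration (not the team's actual scheme), the address idea can be sketched in a few lines of Python: each snippet carries a short base-4 "address" prefix, so a shuffled pool of strands can still be reassembled in order.

```python
# Toy sketch of address-tagged DNA encoding. Two bits map to one nucleotide;
# every snippet is prefixed with a 4-bit index (supports up to 16 snippets),
# playing the role of the "zip code" described above.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {v: k for k, v in BITS_TO_BASE.items()}

def encode(bits: str, chunk_bits: int = 8) -> list:
    """Split a bit string into chunks and tag each with its address."""
    snippets = []
    for i in range(0, len(bits), chunk_bits):
        address = format(i // chunk_bits, "04b")
        payload = address + bits[i:i + chunk_bits]
        snippets.append("".join(BITS_TO_BASE[payload[j:j + 2]]
                                for j in range(0, len(payload), 2)))
    return snippets

def decode(snippets: list) -> str:
    """Sort snippets by embedded address, strip it, and rejoin the bits."""
    decoded = []
    for s in snippets:
        bits = "".join(BASE_TO_BITS[b] for b in s)
        decoded.append((int(bits[:4], 2), bits[4:]))
    return "".join(chunk for _, chunk in sorted(decoded))
```

Even if the strands come back out of order, sorting on the address recovers the original bit stream.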


All in all, their novel approach to encoding and decoding paid off: they were able to restore all of the data from the DNA without any errors or data loss. The whole project is impressive, but the current method only works for archival data that requires no alterations and no immediate access. While this could provide a good service to companies with large stores of information, I wonder how practical it really is. On the one hand, one drop of DNA can store about 10,000 GB. On the other hand, what is our obsession with storing everything?!


This could also present a sort of security breach, as companies like Facebook would have a copy of all of your photos and your profile for eternity – long after you choose to delete your profile and cancel your account. Also, with the compactness of DNA data storage, will companies choose to keep archival data forever, rather than for the 5-10 years they currently do before running out of hard disk space? Where is the line drawn, and what are the rights of customers whose archival data (which could include SSNs and bank information) is stored forever by a company they no longer choose to actively do business with?


Have a story tip? Message me at:

Tibbo Project System (TPS) is a highly configurable, affordable, and innovative automation platform. It is ideal for home, building, warehouse, and production floor automation projects, as well as data collection, distributed control, industrial computing, and device connectivity applications.


Suppliers of traditional “control boxes” (embedded computers, PLCs, remote automation and I/O products, etc.) typically offer a wide variety of models differing in their I/O capabilities. Four serial ports and six relays. Two serial ports and eight relays. One serial port, four relays, and two sensor inputs. These lists go on and on, yet never seem to contain just the right mix of I/O functions you are looking for.


Rather than offering a large number of models, Tibbo Technology takes a different approach: Our Tibbo Project System (TPS) utilizes Tibbits® – miniature electronic blocks that implement specific I/O functions. Need three RS232 ports? Plug in exactly three RS232 Tibbits! Need two relays? Use a relay Tibbit. This module-based approach saves you money by allowing you to precisely define the features you want in your automation controller.

Here is a closer look at the process of building a custom Tibbo Project System.



Start with a Tibbo Project PCB (TPP)



A Tibbo Project PCB is the foundation of TPS devices.

Available in two sizes – medium and large – each board carries a CPU, memory, an Ethernet port, power input for +5V regulated power, and a number of sockets for Tibbit Modules and Connectors.


Add Tibbit® Blocks


Tibbits (as in “Tibbo Bits”) are blocks of prepackaged I/O functionality housed in brightly colored rectangular shells. Tibbits are subdivided into Modules and Connectors.

Want an ADC? There is a Tibbit Module for this. 24V power supply? Got that! RS232/422/485 port? We have this, and many other Modules, too.

Same goes for Tibbit Connectors. DB9 Tibbit? Check. Terminal block? Check. Infrared receiver/transmitter? Got it. Temperature, humidity, and pressure sensors? On the list of available Tibbits, too.



Assemble into a Tibbo Project Box (TPB)


Most projects require an enclosure. Designing one is a tough job. Making it beautiful is even tougher, and may also be prohibitively expensive. Finding or making the right housing is a perennial obstacle to completing low-volume and hobbyist projects.

Strangely, suppliers of popular platforms such as Arduino, Raspberry Pi, and BeagleBone do not bother with providing any enclosures, and available third-party offerings are primitive and flimsy.

Tibbo understands enclosure struggles and here is our solution: Your Tibbo Project System can optionally be ordered with a Tibbo Project Box (TPB) kit.

The ingenious feature of the TPB is that its top and bottom walls are formed by Tibbit Connectors. This eliminates a huge problem of any low-volume production operation – the necessity to drill holes and openings in an off-the-shelf enclosure.

The result is a neat, professional-looking housing every time, even for projects with a production quantity of one.

Like boards, our enclosures are available in two sizes – medium and large. Medium-size project boxes can be ordered in the LCD/keypad version, thus allowing you to design solutions incorporating a user interface.



Unique Online Configurator



To simplify the process of planning your TPS, we have created an Online Configurator.

The Configurator allows you to select the Tibbo Project PCB (TPP), “insert” Tibbit Modules and Connectors into the board’s sockets, and specify additional options. These include choosing whether or not you wish to add a Tibbo Project Box (TPB) enclosure, an LCD and keypad, a DIN rail mounting kit, and so on. You can choose to have your system shipped fully assembled or as a parts kit.

The Configurator makes sure you specify a valid system by watching out for errors. For example, it verifies that the total power consumption of your future TPS device does not exceed the available power budget. It also checks the placement of Tibbits, ensuring that there are no mistakes in their arrangement.
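The power-budget check described above amounts to a simple summation and comparison. Here is a minimal sketch of that kind of validation; the Tibbit names and current draws below are invented for illustration and are not Tibbo's actual figures.

```python
# Illustrative Configurator-style power-budget check.
def check_power_budget(tibbits: dict, budget_ma: float):
    """Return (within_budget, total_draw_ma) for a proposed configuration."""
    total = sum(tibbits.values())  # sum the draw of every installed Tibbit
    return total <= budget_ma, total

# Hypothetical configuration with made-up current draws in mA.
config = {
    "RS232 port #1": 40.0,
    "RS232 port #2": 40.0,
    "Relay": 70.0,
}
ok, draw = check_power_budget(config, budget_ma=500.0)
```

A real configurator would also model per-rail limits and socket placement rules, but the core idea is the same accept/reject comparison.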

Completed configurations can be immediately ordered from our online store. You can opt to keep each configuration private, share it with other registered users, or make it public for everyone to see.



Develop your application

Like all programmable Tibbo hardware, Tibbo Project System devices are powered by Tibbo OS (TiOS).

Use our free Tibbo IDE (TIDE) software to create and debug sophisticated automation applications in Tibbo BASIC, Tibbo C, or a combination of the two languages.

To learn more about the Tibbo Project System, please visit Tibbo’s website. TPS parts, as well as complete systems, can be ordered from our online store.


Scientists at Rice University discovered the force field surrounding a Tesla coil is strong enough to cause carbon nanotubes to self-assemble, a phenomenon that could be useful in regenerative medicine.


What if carbon nanotubes could self-assemble and harness enough energy to illuminate LEDs without being touched? Thanks to a new research study conducted by scientists at Rice University, now they can.




The process is called “Teslaphoresis”: the manner by which carbon nanotubes self-assemble into long wires, organized by charge, due to the force field emitted by a Tesla coil. Such self-assembly had previously been observed only at the nano level, over ultrashort distances. This new discovery holds promise for scaling the process up, allowing for new methodologies in science and energy research.


In the experiment, researchers observed the effects of a Tesla coil on carbon nanotubes. The scientists observed that the nanotubes not only self-assembled according to positive or negative charge, but also moved toward the coil over considerable distances. Rice chemist Paul Cherukuri led the research team and the project was entirely self-funded.




"Electric fields have been used to move small objects, but only over ultrashort distances," Cherukuri said. "With Teslaphoresis, we have the ability to massively scale up force fields to move matter remotely."


The research team plans to continue its work, and believes the phenomenon may have a future impact on the development of regenerative medical practices. The team plans to observe how nanotubes are affected by the presence of several Tesla coils at once.


The study findings were published in ACS Nano.



Have a story tip? Message me at:


I finished the project... quite a bit late. But Happy Easter, nonetheless!


See Part 1 and the design of the Chirping Easter Egg project here: [DIY Project] Build a Chirping Easter Egg - part 1


Have a story tip? Message me at:


Could quantum computing render encryption useless? Quantum computing is quickly becoming a reality, as MIT and University of Innsbruck researchers have shown that a scalable quantum computer can be created using just five individual atoms. Such an efficient and fast system could render encryption schemes, like RSA, useless. (via MIT)


MIT researchers have taken the first real step toward solving the big classical problem of factoring by utilizing quantum computing. For a while now, researchers have been trying to build quantum computers that use single atoms to generate zeros and ones, but this has been too hard to implement – especially when dealing with more than one atom.


MIT and University of Innsbruck researchers have come up with the first step toward a scalable quantum computing system, one that uses five atoms. The team was able to stabilize the atoms and know exactly where they are in space by ionizing each calcium atom (removing an electron from each) and trapping them within an electric field. Then, they can change the state of each individual atom with a laser to perform ‘logic gates,’ which can carry out algorithms.


The amazing thing about using atomic ions to perform algorithms is that they can be in multiple states simultaneously, instead of just registering as a zero or a one like each bit in a traditional computer. Within a quantum computer, each atom can register as both zero and one at the same time – making it possible to run two different calculations at once. These atomic-scale units are called ‘qubits.’ Lasers are used to put an atom into this ‘superposition’ of states, which is what makes qubits possible.
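The idea of superposition can be sketched with a toy statevector: a Hadamard gate takes a qubit from a definite zero state to an equal mix of zero and one, which is why n qubits can span 2**n values at once.

```python
import math

# Toy single-qubit statevector demo (a sketch of the math, not the ion-trap
# hardware): amplitudes [a0, a1] over the basis states |0> and |1>.
def hadamard(state):
    """Apply the Hadamard gate to a single-qubit amplitude vector."""
    s = 1 / math.sqrt(2)
    a0, a1 = state
    return [s * (a0 + a1), s * (a0 - a1)]

qubit = [1.0, 0.0]        # definitely |0>
qubit = hadamard(qubit)   # equal amplitude on |0> and |1>
probs = [a * a for a in qubit]  # measurement probabilities: 50/50
```

Squaring the amplitudes gives the measurement probabilities, so after the gate a readout yields 0 or 1 with equal chance; before measurement, both branches exist at once.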


Within the new quantum computing system developed by Isaac L. Chuang and his team, each atom can be in two different energy states at the same time (again, a superposition). Lasers are used to induce superpositions in four of the five atoms within their computer, while the fifth atom is used to store, forward, extract, and recycle the data.


All of this is basically a scientific way of saying that this latest innovation in quantum computing makes it possible to do far more with far fewer resources. To prove the point, the team put its computer to the test by having it demonstrate factoring using Shor’s algorithm: the most efficient algorithm yet devised for factoring numbers. Even with the best conventional technology on hand, factoring large numbers is extremely time consuming and difficult. Hence, this new computer’s ability to handle Shor’s algorithm with more success and ease than other designs is a worthy proof of concept.
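To see why a quantum computer helps here, it is worth separating the classical from the quantum parts of Shor's algorithm. The quantum hardware's only job is finding the period r of a^x mod N; everything else is ordinary arithmetic, which this sketch does by brute force in place of the quantum step.

```python
from math import gcd

def find_period(a: int, n: int) -> int:
    """Smallest r > 0 with a**r % n == 1 (this loop is what the
    quantum computer speeds up exponentially)."""
    r, value = 1, a % n
    while value != 1:
        value = (value * a) % n
        r += 1
    return r

def shor_factor(n: int, a: int):
    """Derive factors of n from the period of a 'lucky' base a."""
    r = find_period(a, n)
    assert r % 2 == 0, "need an even period; pick another base"
    x = pow(a, r // 2, n)           # a^(r/2) mod n
    return gcd(x - 1, n), gcd(x + 1, n)
```

For N = 15 with base a = 2, the period is 4, and the two gcd computations recover the factors 3 and 5; this is exactly the result the five-atom machine demonstrated.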


However, before you get too excited, know that they only factored the number 15 using their new quantum computer design and Shor’s algorithm. It was able to do so successfully 99 percent of the time, which is a great breakthrough in this particular field. It may still be a little while until this type of technology is scaled up to tackle bigger problems and becomes a staple in commercial and consumer computers alike.


For now, everyone is just ecstatic that the computer actually works using five single atoms to get the job done – something that seemed improbable before. The design is supposed to be scalable, so with enough funding, future scientists could build a computer that uses 15, 20, or 100 individual atoms. Looking ahead, the emergence of this technology means that encryption based on factoring will become obsolete. Currently, factoring-based encryption is used to protect everything from banking information to national secrets. Hence, now would be the time to come up with a better solution for online security.


Have a story tip? Message me at:



I am smitten with the idea of the beeping Easter Egg for visually impaired kids - see this post for more.


Despite digging around, I couldn't find any designs or diagrams for their egg. So, I designed my own.


Originally I thought of using a Raspberry Pi Zero, but later realized it was over the top for what’s necessary. … plenty can still be made without a microcontroller. This beeping Easter egg uses the age-old 555 timer. (For those who may attempt to make one too, the 10K resistor with the star around it sets the time between beeps.)


Above is the “schematic.”


UPDATE (3/26/2016): Couldn't build the circuit... the only 555s I had were burnt out. RadioShack doesn't carry components anymore. So sad...


UPDATE 2: The drawing above would make the beep timing a little awkward. Try changing both resistors to 10K. Based on this site - Astable 555 Square Wave Calculator
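The standard astable-555 formulas behind that calculator can be checked in a few lines. R1 = R2 = 10K as suggested in the update; the 100 µF timing capacitor is an assumed value for illustration, since the drawing's capacitor isn't specified here.

```python
# Textbook timing formulas for a 555 in astable mode:
#   t_high = 0.693 * (R1 + R2) * C   (capacitor charging through R1 + R2)
#   t_low  = 0.693 * R2 * C          (capacitor discharging through R2)
def astable_555(r1_ohms: float, r2_ohms: float, c_farads: float):
    """Return (frequency_hz, duty_cycle) for a 555 astable oscillator."""
    t_high = 0.693 * (r1_ohms + r2_ohms) * c_farads
    t_low = 0.693 * r2_ohms * c_farads
    period = t_high + t_low
    return 1.0 / period, t_high / period

freq, duty = astable_555(10e3, 10e3, 100e-6)
```

With equal 10K resistors and a 100 µF cap, this works out to roughly one beep every two seconds at a 2/3 duty cycle; note that with this topology the output is high longer than it is low, so the "beep" spacing depends on how the speaker is driven.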


UPDATE 3: I finally built the project. My original 555 timer was indeed broken. Swapped it out, and it worked perfectly! See the build here: [DIY Project] Build a Chirping Easter Egg - part 2


A bomb squad in St. Charles hardwired Easter eggs to make a chirping sound so children with special needs would be able to participate in an egg hunt for the first time. (Care of Roberto Rodriguez of St. Louis Post-Dispatch)


I love this story... and the idea. What a great event for visually impaired kids and adults alike. I find it very inspiring.


The St. Charles County bomb squad in Missouri recently used their tactical skills to tackle a new challenge. The team used its electronics background to make chirping Easter eggs that enabled visually impaired children, children with autism, and children with mobility challenges to participate in an Easter egg hunt for the first time.


Although Easter has its roots in the biblical story, many adults today celebrate the day with tons of sweets and candy. In fact, a recent survey revealed Americans spend more on candy for Easter than for Halloween, with a projected $2.4 billion spent this year on Easter alone. Yet children with disabilities are rarely able to participate in the fun. The St. Charles County bomb squad wanted to change that.


Corporal Steve Case is the bomb squad commander. In a recent interview with NPR, he revealed he has an 18-year-old son with autism, and the drive to create the event for special needs kids was a personal one. The team realized that the challenge for kids with disabilities lies in their inability to see the eggs, or to easily discern what they’re looking for. The team thought that if it could make the eggs chirp, the kids would have a shot at finding them; and it worked.



The squad making the chirping eggs. I wish they had shared their design... (via Fox2News)


The team essentially hid beepers inside plastic Easter egg shells. Each egg chirped continuously until a child found it, at which point the electronic egg was swapped for one filled with candy or toys. The eggs had a fairly simple design, with a rigged on/off switch along the side and a battery stashed in the interior. Case said that while steady hands and an understanding of electronics come with the territory of bomb deactivation, making the eggs function was still challenging for the team.


Still, Case would agree the payoff was well worth the effort. This year’s hunt was one of the first Case had the chance to witness. He told NPR he knows what it’s like to be excluded from events due to a child’s disability. He hopes the initiative becomes an annual one.


The team ran several egg hunts for children with different kinds of disabilities – vision impairment, mobility challenges, and autism. One parent told the St. Louis Post-Dispatch her son’s face lit up when he found a chirping egg – perhaps one of the first times he’s been able to participate in a community egg hunt due to autism. For Case, that makes it all worthwhile.


Have a story tip? Message me at:




A sample of what Operator looks like. Created by Hoefler & Co., this font focuses on tricky punctuation. (via Typography)


Typefaces affect how we see things. There's the standard Times New Roman that you can't go wrong with, or the dreaded Comic Sans, which is met with derision. Not only is font an important element in reading and typing, it's also important when it comes to coding. Operator Mono, created by Jonathan Hoefler, is a new font that's supposed to make life easier for programmers.


Hoefler got the idea from monospaced, or fixed-width, typefaces, which are closely associated with vintage typewriters. He wanted a similar font for programming, with some fine-tuned adjustments. “In developing Operator,” says Hoefler, “we found ourselves talking about JavaScript and CSS, looking for vinyl label embossers on eBay, renting a cantankerous old machine from perhaps the last typewriter repair shop in New York, and unearthing a flea market find that amazingly dates to 1893.”


Operator pays special attention to things like brackets, braces, and punctuation marks, which can often make or break code. The font is also supposed to make it easier to tell apart I, l, and 1, or colons and semicolons, using color and italics to make them easier to spot in endless code. The font comes in two varieties: Operator, which is natural width, and Operator Mono, which is fixed width. Both are available in nine different weights, from Thin to Ultra, and include roman and italic small caps throughout. Both are supported by companion ScreenSmart fonts, designed for use in browsers at text sizes.


Those interested in the font can purchase it starting at $200 from Hoefler & Co. It's a hefty price to pay to make programming easier, especially when there are a number of alternatives out there. A quick Google search will bring up the best fonts to use for programming, ranging from Consolas to Monaco. Sites like Slant will even show the pros and cons of each font, along with where you can get it. Many of the fonts are inexpensive; some are even free.


Operator has good intent behind it, but people who have been programming for years may not want to pay that much to have color and italics added to their typeface. Seasoned programmers know the errors and trip-ups they have to keep an eye out for, so this new font may not appeal to them. But those who are new to the field and have extra money to burn may want to look into this new typeface.


Have a story tip? Message me at:


Researchers at the University of Southampton have developed a way to record and retrieve data in five dimensions. The process uses light to read information stored in nanostructured glass. The data files can last billions of years and are being used to store the most influential documents of our civilization, preserving our memory long after we are gone. (via U of Southampton)


Our civilization is obsessed with understanding and uncovering the past. Much of what we know about past civilizations, however, has been pieced together from educated assumptions and preserved artifacts. But what if we had a way to preserve the most important beliefs and documents of our era, to ensure the civilizations that follow can continue to advance mankind and learn from our mistakes? Well now, we do.


Researchers from the University of Southampton’s Optoelectronics Research Centre have spent the past few years perfecting data storage in five dimensions. The new medium can store 360 TB of information, withstand temperatures of 190 degrees Celsius for 13.8 billion years, and is considered to be very stable overall. The portable discs of memory are being used to store huge archives of data, including the King James Bible, the Magna Carta, Newton’s Opticks, and the Universal Declaration of Human Rights – and that’s just the beginning.




The researchers base the technology on nanostructured glass, or fused quartz. The glass is encoded with femtosecond laser writing, which produces three small layers of dots separated by five micrometres. When light shines through the small, circular storage files, its polarization is modified, and the data can be read. The writing, however, must be read through an optical microscope and a polarizer.


The researchers compare the innovation to Superman’s memory crystals. They say the files are five-dimensional because of the 3D position of the nanostructured quartz itself, in addition to the nano-scale size and orientation of the structures. The technology was demonstrated successfully at the UNESCO International Year of Light ceremony in Mexico.


ORC Professor Peter Kazansky said the innovation is thrilling in its ability to preserve the monuments of our civilization, ensuring that what we have learned will be remembered. The technology has the capability to record entire libraries, and there’s no telling what information the researchers will transform into these timeless files.


The researchers presented their findings at The International Society for Optical Engineering 2016 conference in San Francisco, CA, last week. They hope to commercialize their innovation and are seeking industry partners to make this possible.



Have a story tip? Message me at:




Researchers at Switzerland’s ETH Zurich have successfully made the world’s smallest optical network switch. At the size of a single atom, it may revolutionize network infrastructure in only a few years’ time. (via ETH)


In order to keep up with the increasing rate of data transmission, a team of Swiss researchers at ETH Zurich recently developed the world’s smallest optical network switch. It measures on the atomic scale and is actually smaller than the wavelength of light needed to operate it. The research may revolutionize data transmission in only a few years’ time by allowing for the development of the most powerful network infrastructure to date.


According to a paper published by the research team, data transmission on mobile and wire-based platforms continues to soar at incredible rates – 23% and 57% per year, respectively. Current operational network switches vary from a few centimeters to a few inches in width, and if rates of data transmission continue to rise, network infrastructure must physically expand to keep up. For that reason, researchers at Switzerland’s ETH Zurich set out to make an optical network switch that could enable a more powerful, yet smaller, machine.




ETH Professor of Photonics and Communications Jürg Leuthold led the research team, and Senior Scientist Alexandros Emboras was largely responsible for the design that made the successful development of the switch possible. Emboras discovered that by placing a silicon membrane between a small pad made of silver, and another small pad made of platinum, he could manipulate atoms with wavelengths of light at low frequencies.


The modulator functions by keeping enough space – a few nanometers – between the small pads, and feeding wavelengths of light from an optical fiber through the small crevice. The light acts as a surface plasmon, which enables the transfer of energy to individual atoms on the metallic surfaces. These atoms begin moving at the speed of the light itself, and if the atoms enter the space between the two metallic pads, a short circuit is created through which data may be transmitted.




By controlling the flow of light through the optical fiber, Emboras was able to control the atoms, which acted as an on or off switch to the optical network circuit. By monitoring the activity on a highly specialized computer, team member and ETH Professor Mathieu Luisier was able to confirm the switch was activated by a single atom, making it both the smallest ever optical network switch, and the smallest possible switch at a single atom.


The discovery is revolutionary for a number of reasons. Its size allows for the development of smaller, more powerful network infrastructure that can sustain the rapid growth of data transmission. With this, it also provides a truly digital signal (a one or a zero), allowing the switch to also act as a transistor. It is a significant accomplishment for the information sciences.


Unfortunately, the switch is not ready for commercialization yet. Currently, it only exhibits a 17% success rate, and is only able to transmit data at megahertz frequencies. Researchers plan to continue their efforts and expect to present a practical, potentially marketable solution within the next few years.


Have a story tip? Message me at:

I have been working on a new interface board for the Jetson TK1 embedded supercomputer called the Jetduino. It makes it very easy to build robots that can use the parallel processing capabilities of the NVIDIA GPU for vision and neural networks.



The Jetduino mounts above the Jetson and has a small connector that fits into the 2mm J3A connectors. A Raspberry Pi GPIO ribbon cable then connects it to the Jetduino. It has mounting points for a 2.5" HDD, wireless antennas, a large prototyping area, and a built-in shield for an Arduino Due. Just like the GrovePi for the Raspberry Pi, you can use Python and C libraries to talk directly to the Arduino to set and receive digital and analog data. You can also control regular and smart servos. It has numerous Grove and RobotGeek connectors for modular sensors and motor actuators.

I just put out a new blog post and two YouTube videos showing how to set up the Jetduino and perform digital I/O with 12 of the Jetson TK1 GPIO lines and the 54 lines available on the Due. I am working on videos to show off the other features as well. If you would like to be notified when these are available, please sign up for my newsletter.

I plan to launch a crowdfunding campaign to get the Jetduino produced sometime in March or April, and I will need the help of any makers out there who want to make it easier to build robots or electronic projects with the awesome Jetson TK1.


Blog Post with YouTube videos:



Here are a couple of links describing what the Jetduino is.

Jetduino V1 description:

Jetduino V1 test results:




Researchers at CU-Boulder, MIT, and UC Berkeley have successfully built a photonic microchip that uses light to transmit data. It achieves a bandwidth density of 300 gigabits per second per square millimeter on a minute 3 x 6 mm chip and is the first of its kind. It may revolutionize data transmission forever. (via University of Colorado Boulder)

While Intel’s new computer processing chips have gained a reputation for packing unprecedented power and speed, researchers at The University of Colorado Boulder are reinventing how we execute data transmission. In collaboration with researchers from MIT and UC Berkeley, the team has successfully transmitted data using light instead of electricity.


Relying on light for data transmission is genius. The technology can send information over a larger distance using the same amount of energy electrical units take, which means standard microchips will require even less energy than they already do. With this, photonic technology has another significant advantage – multiple streams of data can be transmitted at once across different electromagnetic spectrums, i.e., colors of light, on the same fibers currently used to transmit data electronically. Basing microchip technology on photons, while recycling existing hardware, will thus revolutionize data transmission, by transmitting data faster and more energy-efficiently than any technology currently available.


The technology is based on infrared light with a wavelength about one-hundredth the thickness of a human hair (shorter than one micron), Miloš Popović, an assistant professor in CU-Boulder’s Department of Electrical, Computer, and Energy Engineering and a co-corresponding author of the study, told reporters at CU-Boulder. On a single microchip, the researchers achieved a bandwidth density of 300 gigabits per second per square millimeter. This is up to 50 times greater bandwidth than anything currently available on the market.


The researchers successfully built a functional photonic microchip that mimics an electricity-only design. The chip is 3 by 6 mm and utilizes the same electronic circuitry as existing models. Its light-based transmission technology, however, allows it to have 850 optical I/O components, and the design can be mass-produced by existing manufacturing processes fairly smoothly. It is the only chip of its kind – the only processor in the world to transmit data using light.
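Taking the article's figures at face value, the headline numbers are easy to sanity-check; note the aggregate below assumes the whole die sustains the reported density, which is an extrapolation rather than a reported result.

```python
# Back-of-the-envelope check on the reported figures: a 3 mm x 6 mm chip
# at a bandwidth density of 300 Gbit/s per square millimetre.
area_mm2 = 3 * 6                      # chip area in mm^2
density_gbps_per_mm2 = 300            # reported bandwidth density
aggregate_gbps = area_mm2 * density_gbps_per_mm2
aggregate_tbps = aggregate_gbps / 1000
```

If every square millimetre of the die hit that density, the chip would move on the order of 5.4 terabits per second.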


The researchers are confident in the technology’s contribution to modern computing. Mark Wade, a CU-Boulder PhD candidate and co-lead author of the study, said the design solves the computing communication bottleneck of electricity-only systems while remaining streamlined enough to be mass-produced. The research team plans to sell the technology, and a start-up was created to do just that. Ayar Labs (formerly OptiBit) will continue to operate independently, specializing in high-volume data transmission using energy-efficient technology. The start-up also won the MIT Clean Energy Prize just last year.


We live in the age of information. With current computing speeds already nearing the physical limitations of electricity-based technology, our societal advancements are limited by our computing speed. According to John E. Howland of Trinity University, meteorologists are limited by slower computing speeds. Faster processing will have a direct impact on the natural sciences, and our ability to understand the world around us. Beyond faster gaming and data retrieval than we ever thought possible, artificial intelligence and science will advance beyond our wildest imaginations when faster processing speeds are possible. And now they are.


According to study researchers, manufacturers have begun streamlining processes to mass-produce photonic technology. It won’t be long before we see the direct benefits of what a limitless society can accomplish together. Rajeev Ram, a professor of electrical engineering at MIT, led the research team. The details of the study were published in the journal Nature.



Ordinary roses or a living, renewable biofuel source? Possibly both? A group of Swedish scientists has made an epic breakthrough by successfully incorporating functioning circuitry into a living organism (in this case, a common rose). They recently published their findings, which include successfully causing ions within the rose’s leaves to light up. The next step is using electronic-organic plants to act as biofuel power plants. (image via Panoramic Images)


It seems that technology has triumphed over nature once more – taking something once sublime and beautiful and turning it into a cold, calculated machine. Never before were scientists able to successfully combine organic plant matter and electronic circuits without killing the plant. Now, a Swedish group of researchers from Linköping University has released its chilling findings in Science Advances. The project started in 2012 and, after many unsuccessful attempts, it seems the team is finally on the right track with a breakthrough that may change our relationship with plants and the whole natural world forever.


It starts with a rose: a beautiful and temperamental plant whose only function is to look beautiful. But why simply enjoy a thing of beauty when you can turn it into an instrument? Perhaps the rose can serve as a radio transmitter, or a renewable energy source, instead of just sitting there; or at least that is what many scientists may think. The problem with combining plants and electronics was that scientists kept trying to splice them together somehow, combining the inorganic with the organic by inorganic means.



A schematic of how their new technology works, from their journal article (via Berggren et al., 2015, Science Advances, Vol. 1, no. 10)


The genius of Magnus Berggren and his team from Sweden is that they discovered how to use the plant’s natural functions and components to create electronic circuitry. They feed the plant a synthetic polymer the same way the plant takes up water and nutrients. As the polymer makes its way up the vascular system of the rose’s stem, it becomes part of the xylem, the leaves, the veins, and the signaling of the rose. These plant components are then used as the main components of the circuitry, allowing electronics and organic bodies to merge and act as one.


Their current synthetic polymer mixture forms a wire up to 10 cm long inside the stem (xylem) without impeding the rose’s ability to absorb water and nutrients. Using this method, the scientists were able to light up ions within the leaves of the plant. Berggren was so surprised that the experiments actually worked that he can’t wait to test new projects, among them a biofuel concept. “Right now we are trying to put electrodes into the leaves with enzymes that we connect to the electrodes,” he told Motherboard. “The sugar that is produced in the leaves is converted by the enzyme; they deliver a charge to the electrode and then hopefully we can collect that charge in a biofuel cell.”


This latest proposition could entirely change our relationship with plants, as forests could turn into renewable power plants for nearby cities. Berggren hopes that this new biofuel possibility will allow us to gain resources from the natural world without destroying it. However, how viable is the health of the rose in the long term? No one knows. It is still very early days, but there is no doubt that science is about to get weirder as electronics and plants begin to meld into a cyborg technology in the years to come.



This chip is a huge step forward in fiber optic communications. University of Colorado researchers combined electrons and photons within a single chip for this landmark development. (all images via University of Colorado & Glenn Asakawa)


Here is a claim and a wish I've heard for decades.

Advances in technology never cease to amaze no matter how big or small, but the University of Colorado takes the cake for best innovation of 2015. The university's researchers have created the first full-fledged processor that transmits data using light instead of electricity. This was done by successfully combining electrons and photons within a single microprocessor. So what does this all mean? It's a big development that could lead to ultrafast, low power data crunching. It also marks a major step for fiber optic communication.


To get this successful outcome, researchers put two processor cores with more than 70 million transistors and 850 photonic components on a single chip. They were then able to fabricate the processor in a foundry that mass-produces high-performance computer chips. This means the design can be scaled up easily and quickly for commercial production. Though the design isn't completely photonic, the processor is still pretty impressive, with an output of 300 Gbps per square millimeter – 10 to 50 times the bandwidth density of conventional chips.


(Left) "The light-enabled microprocessor, a 3 x 6 millimeter chip, installed on a circuit board." (Right) "Electrical signals are encoded on light waves in this optical transmitter consisting of a spoked ring modulator, monitoring photodiode (left) and light access port (bottom)"


Fiber optic communication is a big goal for many researchers and organizations because of its many advantages. It supports greater bandwidth, carries data at higher speeds over longer distances, and uses less energy in general, which is good news for a society that aims to consume less power. There have been advances in fiber optic technology, but until now it has proven difficult to merge photonics and computer chips. Now, these University of Colorado researchers have cleared that hurdle.


But does the chip actually work? Researchers ran several tests and showed that the chip could run various computer programs that required it to send and receive instructions and data from memory. This is how they determined the chip had a bandwidth density of 300 Gbps per square millimeter.


“The advantage with optical is that with the same amount of power, you can go a few centimeters, a few meters or a few kilometers," said study co-lead author Chen Sun. "For high-speed electrical links, 1 meter is about the limit before you need repeaters to regenerate the electrical signal, and that quickly increases the amount of power needed. For an electrical signal to travel 1 kilometer, you'd need thousands of picojoules for each bit.”


If there are further advances in the technology, it won't just mean posting Facebook statuses at lightning-fast speed; it also means data centers will be greener. According to the Natural Resources Defense Council, data centers used an estimated 91 billion kilowatt-hours of electricity in 2013, around 2 percent of the electricity consumed in the United States. Considering those numbers, this is a great way to promote a greener society.
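The NRDC figures are internally consistent, as a quick calculation shows; the implied national total below is derived purely from the two numbers quoted above:

```python
# Sanity check on the NRDC figures: 91 billion kWh was said to be
# about 2 percent of US electricity consumption in 2013.
data_center_kwh = 91e9          # data center usage, 2013
share = 0.02                    # stated share of national consumption
implied_total_kwh = data_center_kwh / share
print(implied_total_kwh)        # ~4.55e12 kWh (~4,550 billion kWh)
```

That implied total of roughly 4.5 trillion kWh is the right order of magnitude for US electricity use, so even a modest efficiency gain in data center links represents billions of kilowatt-hours.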



The size of LED board circuit

LED circuit boards are required in ever smaller sizes.
Most electronic products today have become smart and small.
The LED lighting industry has the same requirement, and as a result PCB manufacturers have to produce smaller PCBs.

The temperature of LED board circuit

Heat dissipation becomes a problem as LED circuit boards shrink.
Small LED lighting requires not only a smaller size but also better heat dissipation performance.
Because of the trend towards miniaturization, thermal output per surface unit is increasing, which means that ever more heat is emitted onto an ever smaller surface area for dissipation.
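The rising thermal output per surface unit is easy to see numerically. A minimal sketch, where the 5 W LED power and the board areas are illustrative assumptions rather than figures from any particular product:

```python
# Heat flux (W/cm^2) for a fixed LED power on shrinking board areas.
# Assumed illustrative values: a 5 W LED load on boards that halve
# in area at each step.
led_power_w = 5.0
for area_cm2 in (10.0, 5.0, 2.5):
    flux = led_power_w / area_cm2       # heat to dissipate per cm^2
    print(f"{area_cm2:4.1f} cm^2 -> {flux:.1f} W/cm^2")
```

Halving the board area doubles the heat flux the board must shed, which is why smaller LED PCBs need metal-core substrates, thermal vias, or other improved heat-dissipation measures.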


