
Embedded


DNA-storage-close-up.jpg

Over 10,000 GB can be stored in this tiny pink droplet! Is DNA storage a real possibility? UW and Microsoft partnered to create a method for accurately storing hard drive data in DNA snippets and recovering it again. Their latest trial recovered the data perfectly, thanks to their new approach to encoding and decoding. (via University of Washington)

 

Wetware on the way?

 

Microsoft Research has set out to change the market for archival data storage by using DNA to store millions of gigabytes of data in a single gram. To achieve this feat, which we recently posted about, they teamed up with researchers at the University of Washington; the group shared its findings in a paper presented at the ACM International Conference on Architectural Support for Programming Languages and Operating Systems.

 

Their paper elaborates on how Microsoft Research, in collaboration with University of Washington researchers, successfully stored and retrieved data encoded in synthetic DNA. So far, this team is one of only two research groups to encode and retrieve data stored in DNA with a one hundred percent success rate.

 

So, what’s the secret? It seems to lie in the encoding and decoding process. The process used to create and read the DNA is fairly simple. First, they encode a chunk of data into the letters A, C, G, and T: the nucleotides that are the building blocks of DNA. They then outsource the creation of snippets of DNA strands that carry their encoded sequence of letters.

 

To retrieve the data, they must sequence the DNA strands, which all sit together in the same test tube (seen above as a tiny speck of pink). Of course, retrieval is more involved than simply reading out the sequence of the DNA in the test tube: you have to decode it. And here is where this interdisciplinary team from Microsoft and the University of Washington got it very right!

 

The magic is in how they chose to encode the data from its original bits of zeros and ones into the nucleotides A, C, G, and T. They knew that if they could streamline the process, they would see little to no error later in decoding. Essentially, they made it as simple as possible to avoid the errors that come with complexity. But how could they know where each snippet of DNA fell in the full sequence of the data? They encoded the equivalent of zip codes and street addresses into each snippet, so each one could be placed correctly in the bigger sequence for accurate decoding. A pretty clever and simple solution, right?
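
The paper’s exact scheme isn’t reproduced in this post, but the core idea fits in a few lines. Here’s a minimal Python sketch of 2-bits-per-nucleotide encoding with an index – the “street address” – prepended to each chunk; all names and sizes are illustrative assumptions, not the Microsoft/UW format:

```python
# Minimal sketch of index-addressed DNA encoding (illustrative only; the
# real scheme adds redundancy and avoids long runs of one nucleotide).

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {v: k for k, v in BITS_TO_BASE.items()}

def to_bases(bits: str) -> str:
    """Map a bit string (even length) onto nucleotides, 2 bits per base."""
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def encode(data: bytes, chunk_bits: int = 16, addr_bits: int = 16) -> list[str]:
    """Split data into chunks, prefixing each with its 'street address'."""
    bits = "".join(f"{byte:08b}" for byte in data)
    chunks = [bits[i:i + chunk_bits] for i in range(0, len(bits), chunk_bits)]
    return [to_bases(f"{n:0{addr_bits}b}" + c) for n, c in enumerate(chunks)]

def decode(strands: list[str], addr_bits: int = 16) -> bytes:
    """Recover the payload even if strands come out of the tube shuffled."""
    chunks = {}
    for strand in strands:
        bits = "".join(BASE_TO_BITS[b] for b in strand)
        chunks[int(bits[:addr_bits], 2)] = bits[addr_bits:]
    bits = "".join(chunks[n] for n in sorted(chunks))
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strands = encode(b"hello world")
import random; random.shuffle(strands)  # a test tube has no ordering
assert decode(strands) == b"hello world"
```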

 

All in all, their novel approach to encoding and decoding paid off: they were able to restore all of the data from the DNA without any errors or data loss. The whole project is impressive, but the current method only works for archival data that requires no alterations and no immediate access. While this could serve companies with large stores of information well, I wonder how practical it really is. On the one hand, one drop of DNA can store about 10,000 GB. On the other hand, what is our obsession with storing everything?!

 

This also presents a potential security problem: companies like Facebook could keep a copy of all of your photos and your profile for eternity – long after you choose to delete your profile and cancel your account. And with the compactness of DNA data storage, will companies keep archival data forever, rather than purging it after 5-10 years when they run out of hard disk space? Where is the line drawn, and what are the rights of customers if their archival data (which could include SSNs and bank information) is stored forever by a company they no longer choose to do business with?

 

Have a story tip? Message me at:

http://twitter.com/Cabe_Atwell

0425_TESLA-3-rn-1r4lnpb.jpg

Scientists at Rice University discovered the force field surrounding a Tesla coil is strong enough to cause carbon nanotubes to self-assemble, a phenomenon that could be useful in regenerative medicine.

 

What if carbon nanotubes could self-assemble and harness enough energy to illuminate LEDs without touch? Thanks to a new research study conducted by scientists at Rice University, now they can.

 

0425_TESLA-2-rn-165leqj.jpg

 

The process is called “Teslaphoresis,” and it is the manner by which carbon nanotubes self-assemble into long wires, organized by charge, due to the force field emitted by a Tesla coil. This kind of assembly had previously been observed only at the nano level, over ultrashort distances. The new discovery holds promise for scaling the process up, opening new methodologies in science and energy research.

 

In the experiment, researchers observed the effects of a Tesla coil on carbon nanotubes. The scientists observed that the nanotubes not only self-assembled according to positive or negative charge, but also moved toward the coil over considerable distances. Rice chemist Paul Cherukuri led the research team and the project was entirely self-funded.

 

0425_TESLA-1-rn-29eew8m-310x206.jpg

 

"Electric fields have been used to move small objects, but only over ultrashort distances," Cherukuri said. "With Teslaphoresis, we have the ability to massively scale up force fields to move matter remotely."

 

The research team plans to continue its work, and believes the phenomenon may have a future impact on the development of regenerative medical practices. The team plans to observe how nanotubes are affected by the presence of several Tesla coils at once.

 

The study findings were published in ACS Nano.

 

 

Have a story tip? Message me at:

http://twitter.com/Cabe_Atwell

 

I finished the project... quite a bit late. But Happy Easter, nonetheless!

 

See Part 1 and the design of the Chirping Easter Egg project here: [DIY Project] Build a Chirping Easter Egg - part 1

 

Have a story tip? Message me at:

http://twitter.com/Cabe_Atwell

mitquantum.jpg

Could quantum computing render encryption useless? Quantum computing is quickly becoming a reality, as MIT and University of Innsbruck researchers have shown that a scalable computer can be created using five individual atoms. Such an efficient and fast system could render encryption schemes, like RSA, useless. (via MIT)

 

MIT researchers have taken the first real step toward solving the big classical problem of factoring using quantum computing. For a while now, researchers have been trying to build quantum computers that use single atoms to generate zeros and ones, but this has been too hard to implement – especially when dealing with more than one atom.

 

MIT and University of Innsbruck researchers have come up with the first step toward a scalable quantum computing system, using five atoms. The team was able to stabilize the atoms and pin down exactly where they are in space by ionizing each calcium atom (removing an electron from each) and trapping them in an electric field. They can then change the state of each individual atom with a laser to perform ‘logic gates,’ which carry out algorithms.

 

The amazing thing about using atomic ions to perform algorithms is that they can be in multiple states simultaneously, instead of registering as just zero or one to form each bit of data, as in traditional computers. Within a quantum computer, each atom can register as both zero and one simultaneously – making it possible to run two different calculations at once. These atomic-scale units are called ‘qubits,’ and lasers are used to put an atom into the ‘superposition’ that makes qubits possible.
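
That “both zero and one” business is plain linear algebra underneath. As a hedged illustration (textbook math only, nothing specific to the trapped-ion hardware), here’s a single simulated qubit in Python:

```python
import numpy as np

# A qubit is a 2-component complex vector; |0> and |1> are the basis states.
ket0 = np.array([1, 0], dtype=complex)

# A pulse that creates an equal superposition acts like a Hadamard gate
# in the textbook formalism (standing in for the laser here).
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

qubit = H @ ket0            # now (|0> + |1>) / sqrt(2): "both at once"
probs = np.abs(qubit) ** 2  # measurement probabilities: [0.5, 0.5]

# Measuring collapses the qubit to a definite 0 or 1 each time:
samples = np.random.choice([0, 1], size=10, p=probs)
print(probs, samples)
```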

 

Within the new quantum computing system developed by Isaac L. Chuang and his team, each atom can be in two different energy states at the same time (again, a superposition). Lasers are used to induce superpositions in four of the five atoms in the computer, and the fifth atom is used to store, forward, extract, and recycle the data.

 

All of this is basically a scientific way of saying that this latest innovation in quantum computing makes it possible to do far more with far fewer resources. To prove the point, the team put its computer to the test by having it demonstrate factoring using Shor’s algorithm: the most efficient factoring algorithm known. Factoring is extremely time consuming and difficult even for the best technology we have on hand, so this new computer’s ability to handle Shor’s algorithm with more success and ease than other models is a worthy proof of concept.

 

However, before you get too excited, know that they only factored the number 15 using the new quantum computer design and Shor’s algorithm. It did so successfully 99% of the time, which is a great breakthrough in this particular field. It may still be a while until this type of technology is scaled up to tackle bigger problems and becomes a staple in commercial and consumer computers alike.
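
For the curious, the heavy lifting a quantum computer does in Shor’s algorithm is finding the period r of aˣ mod N; everything else is ordinary number theory. Here’s a minimal sketch for N = 15, with brute-force period finding standing in for the quantum step, so this is illustrative only:

```python
from math import gcd

def find_period(a: int, n: int) -> int:
    """Brute-force stand-in for the quantum period-finding step."""
    x, r = a % n, 1
    while x != 1:
        x, r = (x * a) % n, r + 1
    return r

def shor_classical(n: int, a: int = 2) -> tuple[int, int]:
    r = find_period(a, n)        # e.g. 2^4 = 16 = 1 (mod 15), so r = 4
    assert r % 2 == 0, "odd period; retry with a different a"
    y = pow(a, r // 2, n)        # 2^2 = 4
    return gcd(y - 1, n), gcd(y + 1, n)

print(shor_classical(15))        # (3, 5) -- the factors of 15
```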

 

For now, everyone is just ecstatic that the computer actually works and uses five single atoms to get the job done – something that seemed improbable before. The design is meant to be scalable, so with enough funding, future scientists could build a computer that uses 15, 20, or 100 individual atoms. Longer term, the emergence of this technology means that encryption based on factoring will become obsolete. Factoring-based encryption currently protects everything from banking information to national secrets, so now would be the time to come up with a better approach to online security.

 

Have a story tip? Message me at:

http://twitter.com/Cabe_Atwell

Cabe easter egg trace small.jpg

 

I am smitten with the idea of the beeping Easter Egg for visually impaired kids - see this post for more.

 

Despite digging around, I couldn't find any designs or diagrams for their egg. So, I designed my own.

 

Originally I thought of using a Raspberry Pi Zero, but later realized it was over the top for what's necessary. … plenty can still be made without a microcontroller. This beeping Easter Egg uses the age-old 555 timer. (For those who may attempt to make one too, the 10K resistor with the star around it sets the time between beeps.)

 

Above is the “schematic.”

 

UPDATE: (3/26/2016) Couldn't build the circuit... the only 555s I had were burnt out. RadioShack doesn't carry components anymore. So sad...

 

UPDATE 2: The drawing above would space the beeps out a little awkwardly. Try changing both resistors to 10K. Based on this site - Astable 555 Square Wave Calculator
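
The math behind that calculator is just the standard 555 astable equations, so you can sanity-check values without leaving your terminal. A quick sketch – note the 10 µF timing capacitor is my placeholder, since the cap value from the schematic isn't reproduced here:

```python
# Standard 555 astable equations (the same math the linked calculator uses).
# Component values below are placeholders, not the schematic's exact parts.

def astable_555(r1_ohms: float, r2_ohms: float, c_farads: float):
    t_high = 0.693 * (r1_ohms + r2_ohms) * c_farads  # output-high time (s)
    t_low = 0.693 * r2_ohms * c_farads               # output-low time (s)
    freq = 1.44 / ((r1_ohms + 2 * r2_ohms) * c_farads)
    duty = t_high / (t_high + t_low)
    return freq, duty

# With both resistors at 10K and an assumed 10 uF cap:
f, d = astable_555(10e3, 10e3, 10e-6)
print(f"{f:.1f} Hz at {d:.0%} duty")  # ~4.8 Hz at ~67% duty
```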

 

UPDATE 3: I finally built the project. My original 555 timer was indeed broken. Swapped it out, and it worked perfectly! See the build here: [DIY Project] Build a Chirping Easter Egg - part 2

 

Have a story tip? Message me at:

http://twitter.com/Cabe_Atwell

56edd6085974f.image.jpg

A bomb squad in St. Charles hardwired Easter eggs to make a chirping sound so children with special needs would be able to participate in an egg hunt for the first time. (Care of Roberto Rodriguez of St. Louis Post-Dispatch)

 

I love this story... and the idea. What a great event for visually impaired kids and adults alike. I find it very inspiring.

 

The St. Charles County bomb squad in Missouri recently applied its tactical skills to a new challenge. The team used its electronics background to make chirping Easter eggs that enabled visually impaired children, children with autism, and children with mobility challenges to participate in an Easter egg hunt for the first time.

 

Although Easter has its roots in the biblical story, many adults today celebrate the day with tons of sweets and candy. In fact, a recent survey revealed Americans spend more on candy for Easter than for Halloween. Americans are projected to spend $2.4 billion on Easter this year alone, but children with disabilities are rarely able to participate in the fun. The St. Charles County bomb squad wanted to change that.

 

Corporal Steve Case is the bomb squad commander. In a recent interview with NPR, he revealed he has an 18-year-old son with autism, so the drive to create the event for special-needs kids was a personal one. The team realized that the challenge for kids with disabilities lies in being unable to see the eggs, or to easily discern what they're looking for. The team thought that if it could make the eggs chirp, the kids would have a shot at finding them; and it worked.

 

squad easter.jpg

The squad making the chirping eggs. I wish they had shared their design... (via Fox2News)

 

The team essentially hid beepers inside of plastic Easter egg shells. Each egg chirped continuously until a child found it, and the electronic egg was then swapped for one filled with candy or toys. The eggs had a fairly simple design, with a rigged on/off switch along the side and a battery stashed away in the interior. Case said that while steady hands and an understanding of electronics come with the territory of bomb deactivation, making the eggs function was still challenging for the team.

 

Still, Case would agree the payoff was well worth the effort. This year’s hunt was one of the first Case had the chance to witness. He told NPR he knows what it’s like to be excluded from events due to a child’s disability. He hopes the initiative becomes an annual one.

 

The team ran several egg hunts for children with different kinds of disabilities – vision impairment, mobility challenges, and autism. One parent told the St. Louis Post-Dispatch her son’s face lit up when he found a chirping egg – perhaps one of the first times he’s been able to participate in a community egg hunt due to autism. For Case, that makes it all worthwhile.

 

Have a story tip? Message me at:

http://twitter.com/Cabe_Atwell

 

 

operator_ide2.png

A sample of what Operator looks like. Created by Hoefler & Co, this font focuses on tricky punctuation. (via Typography)

 

Typefaces affect how we see things. There's the standard Times New Roman that you can't go wrong with, or the dreaded Comic Sans, which is met with derision. Not only is font an important element in reading and typing, it's also important when it comes to coding. Operator Mono, created by Jonathan Hoefler's foundry Hoefler & Co, is a new font that's supposed to make life easier for programmers.

 

Hoefler got the idea from monospace, or fixed-width, typefaces, which are closely tied to vintage typewriters. He wanted a similar font for programming, with some fine-tuned adjustments. “In developing Operator,” says Hoefler, “we found ourselves talking about JavaScript and CSS, looking for vinyl label embossers on eBay, renting a cantankerous old machine from perhaps the last typewriter repair shop in New York, and unearthing a flea market find that amazingly dates to 1893.”

 

Operator pays special attention to things like brackets, braces, and punctuation marks, which can often make or break code. The font is also supposed to make it easier to tell the difference between I, l, and 1, or between colons and semicolons, by using color and italics to make them stand out in endless code. The font comes in two varieties: Operator, which is natural width, and Operator Mono, which is fixed width. Both are available in nine different weights from Thin to Ultra, and both include roman and italic small caps throughout. Both are also supported by companion ScreenSmart fonts, designed for use in browsers at text sizes.
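
If you want to judge how any contender handles those trouble spots, a quick test card of ambiguous glyphs works well. A trivial sketch – the character pairs are the classic offenders; view the output in the editor and font you're evaluating:

```python
# Glyph groups that programming fonts must disambiguate.
# Run this, then view the output in the font under test.
groups = [
    ("I l 1 |", "capital I, lowercase l, one, pipe"),
    ("O 0 o", "capital O, zero, lowercase o"),
    (": ;", "colon vs. semicolon"),
    ("` '", "backtick vs. straight quote"),
    ("{ } [ ] ( )", "braces, brackets, parens"),
]
for glyphs, label in groups:
    print(f"{glyphs:<14} {label}")
```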

 

Those interested in the font can purchase it starting at $200 from Hoefler & Co. It's a hefty price to pay to make programming easier, especially when there are a number of alternatives out there. A quick Google search will bring up the best fonts to use for programming, ranging from Consolas to Monaco. Sites like Slant will even show the pros and cons of each font, along with where you can get it. Many of the fonts are inexpensive; some are even free.

 

Operator has good intent behind it, but people who have been programming for years may not want to pay that much to have color and italics added to their typeface. Seasoned programmers know the errors and trip-ups they have to keep an eye out for, so maybe this new font won't appeal to them. But those who are new to the field and have extra money to burn may want to look into this new typeface.

 

Have a story tip? Message me at:

http://twitter.com/Cabe_Atwell

16_23 UN_UDoHR.png

Researchers at the University of Southampton have developed a way to record and retrieve data in five dimensions. The process uses light to read information from nanostructured glass. The data files can last billions of years and are being used to store the most influential documents of our civilization, preserving our memory long after we are gone. (via U of Southampton)

 

Our civilization is obsessed with understanding and uncovering the past. Much of what we know about past civilizations, however, has been pieced together from educated assumptions and preserved artifacts. But what if we had a way to preserve the most important beliefs and documents of our era, to ensure the civilizations that follow can continue mankind's progress and learn from our mistakes? Well, now we can.

 

Researchers from the University of Southampton’s Optoelectronics Research Centre have spent the past few years perfecting data storage in five dimensions. Each disc can store 360TB of information, has an estimated lifetime of 13.8 billion years even at 190 degrees Celsius, and is considered very stable overall. The portable memory discs are being used to store huge archives of data, including the King James Bible, the Magna Carta, Newton’s Opticks, and the Universal Declaration of Human Rights – and that’s just the beginning.

 

5D data storage.jpg

 

The researchers base the technology on nanostructured glass, or fused quartz. The glass is encoded using femtosecond laser writing, which produces three small layers of dots separated by five micrometres. When light shines through the small, circular storage files, its polarization is modified, and the data can be read. The writing, however, must be read with an optical microscope and a polarizer.

 

The researchers compare the innovation to Superman’s memory crystals. They say the files are five dimensional because of the 3D position of each nanostructure within the quartz, in addition to its size and orientation. The technology was demonstrated successfully at the UNESCO International Year of Light ceremony in Mexico.
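
The post doesn't spell out the encoding, but the "five dimensions" bookkeeping is easy to picture: three spatial coordinates per dot plus two optical properties read back through polarized light. Here's a purely illustrative Python model of one such voxel – the field names, level counts, and quantization are my assumptions, not Southampton's format:

```python
from dataclasses import dataclass

# Illustrative model only -- the real Southampton encoding isn't given in
# the post. Each laser-written nanograting carries data in five parameters:
# three spatial coordinates plus the grating's orientation and strength.

@dataclass
class Voxel:
    x_um: float             # dimensions 1-3: position in the quartz
    y_um: float
    z_layer: int            # one of the three dot layers, 5 um apart
    orientation_deg: float  # dimension 4: slow-axis angle of the grating
    retardance: float       # dimension 5: grating strength (0.0 to 1.0)

    def symbol(self, angle_levels: int = 4) -> int:
        """Quantize the two optical parameters into one of 8 symbols (3 bits)."""
        a = int(self.orientation_deg // (180 / angle_levels)) % angle_levels
        s = 1 if self.retardance >= 0.5 else 0
        return a * 2 + s

v = Voxel(x_um=1.0, y_um=2.0, z_layer=0, orientation_deg=95.0, retardance=0.7)
print(v.symbol())  # -> 5: one of 8 possible values per dot
```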

 

ORC Professor Peter Kazansky said the innovation is thrilling in its ability to preserve the monuments of our civilization; that what we learned will be remembered. The technology has the capability to record entire libraries, and there’s no telling what information the researchers will transform into the timeless files.

 

The researchers presented their findings at The International Society for Optical Engineering 2016 Conference in San Francisco, CA, last week. They hope to commercialize the innovation, and are seeking industry partners to make this possible.

 

 

Have a story tip? Message me at:

http://twitter.com/Cabe_Atwell

 

 

eth1.png

Researchers at Switzerland’s ETH Zurich have successfully made the world’s smallest optical network switch. At one atom in size, it may revolutionize network infrastructure in only a few years’ time. (via ETH)

 

In order to keep up with the increasing rate of data transmission, a team of Swiss researchers at ETH Zurich recently developed the world’s smallest optical network switch. It measures on the atomic scale and is actually smaller than the wavelength of light needed to operate it. The research may revolutionize data transmission in only a few years’ time by allowing for the development of the most powerful network infrastructure to date.

 

According to a paper published by the research team, data transmission on mobile and wire-based platforms continues to soar at incredible rates – 23% and 57% per year, respectively. Current network switches range from a few centimeters to a few inches in width, and if data transmission rates continue to rise, network infrastructure must grow physically to keep up. For that reason, the ETH Zurich researchers set out to make an optical network switch that could yield a more powerful, yet smaller, machine.

 

eth3.png

 

ETH Professor of Photonics and Communications Jürg Leuthold led the research team, and Senior Scientist Alexandros Emboras was largely responsible for the design that made the switch possible. Emboras discovered that by placing a silicon membrane between a small silver pad and a small platinum pad, he could manipulate atoms with low-frequency wavelengths of light.

 

The modulator functions by keeping enough space – a few nanometers – between the small pads, and feeding wavelengths of light from an optical fiber through the small crevice. The light acts as a surface plasmon, which enables the transfer of energy to individual atoms on the metallic surfaces. These atoms begin moving at the speed of the light itself, and if the atoms enter the space between the two metallic pads, a short circuit is created through which data may be transmitted.

 

eth2.png

 

By controlling the flow of light through the optical fiber, Emboras was able to control the atoms, which acted as an on or off switch to the optical network circuit. By monitoring the activity on a highly specialized computer, team member and ETH Professor Mathieu Luisier was able to confirm the switch was activated by a single atom, making it both the smallest ever optical network switch, and the smallest possible switch at a single atom.

 

The discovery is revolutionary for a number of reasons. Its size allows for the development of smaller, more powerful network infrastructure that can sustain the rapid growth of data transmission. With this, it also provides a truly digital signal (a one or a zero), allowing the switch to also act as a transistor. It is a significant accomplishment for the information sciences.

 

Unfortunately, the switch is not ready for commercialization yet. Currently, it only exhibits a 17% success rate, and is only able to transmit data at megahertz frequencies. Researchers plan to continue their efforts and expect to present a practical, potentially marketable solution within the next few years.

 

Have a story tip? Message me at:

http://twitter.com/Cabe_Atwell

unspecified1.jpg

Researchers at CU-Boulder, MIT, and UC Berkeley have successfully built a photonic microchip that uses light to transmit data. It has a bandwidth density of 300 gigabits per second per square millimeter across a minute 3x6mm area and is the first of its kind. It may revolutionize data transmission forever. (via University of Colorado Boulder)


While Intel’s new computer processing chips have gained a reputation for packing unprecedented power and speed, researchers at The University of Colorado Boulder are reinventing how we execute data transmission. In collaboration with researchers from MIT and UC Berkeley, the team has successfully transmitted data using light instead of electricity.

 

Relying on light for data transmission is genius. The technology can send information over a larger distance using the same amount of energy electrical links require, which means standard microchips will need even less energy than they already do. Photonic technology has another significant advantage: multiple streams of data can be transmitted at once on different parts of the electromagnetic spectrum, i.e., colors of light, over the same fibers currently used to carry data electronically. Basing microchip technology on photons, while recycling existing hardware, could thus revolutionize data transmission by moving data faster and more energy-efficiently than any technology currently available.

 

The technology is based on infrared light, with a wavelength one-hundredth the thickness of a human hair and shorter than one micron, Miloš Popović, an assistant professor in CU-Boulder’s Department of Electrical, Computer, and Energy Engineering and a co-corresponding author of the study, told reporters at CU-Boulder. On a single microchip, the researchers successfully developed a functional photonic chip with a bandwidth density of 300 gigabits per second per square millimeter – up to 50 times greater bandwidth than anything currently available on the market.

 

The researchers successfully built a functional photonic microchip that mimics an electricity-only design. The chip is 3 by 6 mm and uses the same electronic circuitry as existing models. Its light-based transmission technology, however, allows it to have 850 optical I/O components, and the design can be mass-produced fairly smoothly with existing manufacturing processes. It is the only chip of its kind – the only processor in the world to transmit data using light.

 

The researchers are confident in the technology’s contribution to modern computing. Mark Wade, a CU-Boulder PhD candidate and co-lead author of the study, said the design solves the communication bottleneck of electricity-only systems while remaining streamlined enough to be mass-produced. The research team plans to sell the technology, and a start-up was created to do just that. Ayar Labs (formerly OptiBit) will continue to operate independently, specializing in high-volume data transmission using energy-efficient technology. The start-up also won the MIT Clean Energy Prize just last year.

 

We live in the age of information. With current computing speeds already nearing the physical limitations of electricity-based technology, our societal advancements are limited by our computing speed. According to John E. Howland of Trinity University, meteorologists, for one, are held back by slower computing speeds. Faster processing will have a direct impact on the natural sciences and our ability to understand the world around us. Beyond faster gaming and data retrieval than we ever thought possible, artificial intelligence and science will advance beyond our wildest imaginations when faster processing speeds are possible. And now they are.

 

According to study researchers, manufacturers have begun streamlining processes to mass-produce photonic technology. It won’t be long before we see the direct benefits of what a limitless society can accomplish together. Rajeev Ram, a professor of electrical engineering at MIT, led the research team. The details of the study were published in the journal Nature.


See more news at:

http://twitter.com/Cabe_Atwell

rose.jpg

Ordinary roses, or a living, renewable biofuel source? Possibly both? A group of Swedish scientists has made an epic breakthrough by successfully incorporating functioning circuitry into a living organism (in this case, a common rose). They recently released their findings, which include successfully causing ions within the rose’s leaves to light up. The next step is using electronic-organic plants as biofuel power plants. (image via Panoramic Images)

 

It seems that technology has triumphed over nature once more – taking something once sublime and beautiful and turning it into a cold, calculated machine. Never before had scientists been able to combine organic plant matter and electronic circuits without killing the plant. Now, a Swedish group of researchers from Linköping University has released its chilling findings in Science Advances. Their project started in 2012, after many unsuccessful attempts. It seems that this time they are on the right track, with a breakthrough that may change our relationship with plants and the whole natural world forever.

 

It starts with a rose: a beautiful and temperamental plant whose only function is to look beautiful. But why simply enjoy a thing of beauty when you can turn it into an instrument? Perhaps the rose can serve as a radio transmitter, or a renewable energy source, instead of just sitting there; or at least that is what many scientists may think. The trouble with combining plants and electronics was that scientists were trying to splice them together somehow – to combine the inorganic with the organic by inorganic means.

 

rose tech.png

A schematic of how their new technology works from their journal article (via Berggren et al., 2015, Science Advances, Vol. 1, no. 10)

 

The genius of Magnus Berggren and his team is that they discovered how to use the natural functions of the plant and its components to create electronic circuitry. They used a synthetic polymer, which they feed to the plant the same way a plant takes up water and nutrients. As the polymer makes its way up the vascular system of the rose stem, it becomes part of the xylem, the leaves, the veins, and the signaling of the rose. These components of the plant then serve as the main components of the circuitry, allowing electronics and organic bodies to merge and act as one.

 

Their current synthetic polymer mixture creates a wire up to 10 cm long inside the stem (xylem) without impeding the rose’s ability to absorb water and nutrients. Via this method, the group was able to light up ions within the leaves of the plant. Berggren was so surprised that the experiments actually worked that he can’t wait to test new projects, among them a biofuel concept. “Right now we are trying to put electrodes into the leaves with enzymes that we connect to the electrodes,” he told Motherboard. “The sugar that is produced in the leaves is converted by the enzyme; they deliver a charge to the electrode and then hopefully we can collect that charge in a biofuel cell.”

 

This latest proposition could entirely change our relationship with plants, as forests could turn into renewable power plants for nearby cities. Berggren hopes that the biofuel possibility will allow us to gain resources from our natural world without destroying it. But how viable is the health of the rose in the long term? No one knows. It is still very early days, but there is no doubt that science is about to get weirder as electronics and plants begin to meld into cyborg technology in the years to come.


See more news at:

http://twitter.com/Cabe_Atwell

engineersdem.jpg

This chip is a huge step forward in fiber optic communications. University of Colorado researchers combined electrons and photons within a single chip for this landmark development. (all images via University of Colorado & Glenn Asakawa)

 

Here is a claim and a wish I've heard for decades.


Advances in technology never cease to amaze, no matter how big or small, but the University of Colorado takes the cake for best innovation of 2015. The university's researchers have created the first full-fledged processor that transmits data using light instead of electricity, by successfully combining electrons and photons within a single microprocessor. So what does this all mean? It's a big development that could lead to ultrafast, low-power data crunching. It also marks a major step for fiber optic communication.

 

To get this successful outcome, researchers put two processor cores with more than 70 million transistors and 850 photonic components on a single chip. They were able to fabricate the processor in a foundry that mass-produces high-performance computer chips, which means the design can be easily and quickly ramped up for commercial production. Though the design isn't completely photonic, the processor is still pretty impressive, with an output of 300 Gbps per square millimeter – 10 to 50 times the normal speed.


light_chip11ga.jpg mmc_07_revised_crop.jpg

(Left) "The light-enabled microprocessor, a 3 x 6 millimeter chip, installed on a circuit board." (Right) "Electrical signals are encoded on light waves in this optical transmitter consisting of a spoked ring modulator, monitoring photodiode (left) and light access port (bottom)"

 

Fiber optic communication is a big goal for many researchers and organizations due to its many advantages. It supports greater bandwidth, carries data at higher speeds over larger distances, and uses less energy in general – good news for a society that aims to consume less power. There have been advances in fiber optic technology, but until now it has proven difficult to merge photonics and computer chips. These University of Colorado researchers have jumped over that hurdle.

 

But does the chip actually work? Researchers ran several tests and showed that the chip could run various computer programs requiring it to send and receive instructions and data from memory. This is how they determined the chip had a bandwidth density of 300 Gbps per square millimeter.

 

“The advantage with optical is that with the same amount of power, you can go a few centimeters, a few meters or a few kilometers," said study co-lead author Chen Sun. "For high-speed electrical links, 1 meter is about the limit before you need repeaters to regenerate the electrical signal, and that quickly increases the amount of power needed. For an electrical signal to travel 1 kilometer, you'd need thousands of picojoules for each bit.”

 

If there are further advances in the technology, not only will it mean posting Facebook statuses at lightning-fast speed, it will also mean greener data centers. According to the Natural Resources Defense Council, data centers used an estimated 91 billion kilowatt-hours of electricity in 2013, around 2 percent of the electricity consumed in the United States. Considering those numbers, this is a great way to promote a greener society.

 

See more news at:

http://twitter.com/Cabe_Atwell


A team of researchers from Columbia Engineering, Seoul National University and Korea Research Institute of Standards and Science recently developed the world’s smallest lightbulb – at just one atom thick – using graphene. The structure may also revolutionize computing and chemical experimentation.  (via Columbia)


Graphene never ceases to amaze; take a look at everything written about the material here at element14. A team of researchers from Columbia Engineering, Seoul National University, and the Korea Research Institute of Standards and Science recently created the world's thinnest light bulb, at just one atom thick. The micro bulb on a chip may revolutionize light displays, chemistry, and computing. Researchers are currently developing the technology further for practical use in the near future.

 

Postdoctoral research scientist Young Duck Kim of James Hone’s team at Columbia Engineering headed the project. He and his team of researchers took the same principles of the incandescent light bulb and applied them to graphene to see if Thomas Edison’s world-changing invention could be updated.

 

The team placed the one-atom-thick pieces of graphene on a small strip with metal electrodes, suspended the structure above the substrate, and heated it by passing a current through the graphene filaments. To their surprise, as the graphene heated up, it became luminous – even to the naked eye. The structure is essentially the thinnest visible-light bulb ever made, and its potential to impact numerous technologies is huge.

 

If the graphene light chip comes to market, it could play a critical role in enhancing the capabilities of photonic circuit technology. Photonic circuits are much like electrical circuits, but use light rather than electrons to carry signals, and an on-chip light source must run extremely hot to emit usefully – the filament must be able to handle temperatures of thousands of degrees Celsius. A chip that could both handle that level of heat and stay small enough to fit on a circuit board never existed, until now.

 

The micro light bulb on a chip may have other uses too. Since it can handle more than 2,500 degrees Celsius, it could be used to heat tiny hot plates for observing high-temperature chemical reactions. The tiny bulbs are also see-through and could revolutionize commercial light displays as well. And if the chips can be switched on and off quickly enough, they may have a future as computer bits too.

 

Young and his team are continuing to expand upon the technology. It was a joint effort between researchers from Columbia Engineering, the Korea Research Institute of Standards and Science, Seoul National University, Konkuk University, Sogang University, Sejong University, Stanford University, and the University of Illinois at Urbana-Champaign. Read more about this achievement at Nature after this link...

 

C

See more news at:

http://twitter.com/Cabe_Atwell

self-destructing-chip.jpg

Made out of Gorilla Glass, the chip obliterates itself, shattering into thousands of pieces under extreme stress. (via Xerox PARC, pic via IDG.tv)


I was just thinking, there has to be a way to store data that will self-destruct upon access. Seems we are close to it.

 

The latest development from Xerox PARC engineers is a device straight out of a James Bond film. The team has created a chip that can shatter into bits on command, as part of the Defense Advanced Research Projects Agency's (DARPA) Vanishing Programmable Resources project. How does the chip get this shattering effect? It was made using Gorilla Glass – the same glass used for smartphone screens – instead of plastic and metal. The glass was then tempered to hold extreme internal stress, which causes it to disintegrate easily when triggered.

 

In a demonstration, the chip was brought to its breaking point with heat. A small resistor heated up, and the glass shattered into a ton of tiny pieces. Even after it broke, the small fragments continued to break into even smaller pieces for tens of seconds afterward.

 

Is the chip supposed to just look cool? The effect is awesome, but the chip could actually be a great security measure. It could store sensitive data like encryption keys and then shatter into so many pieces that reconstruction becomes impossible. It's a pretty intense way to handle electronic security, but it's a viable option if hardware happens to fall into the wrong hands.

 

The self-destructing chip was demonstrated in all its glory at DARPA's Wait, What? event in St. Louis last week.

 

“The applications we are interested in are data security and things like that,” said Gregory Whiting, a senior scientist at PARC in Palo Alto, California. “We really wanted to come up with a system that was very rapid and compatible with commercial electronics.”

 

With so much information being stored electronically, more and more companies are employing similar techniques for security. Similar thinking is behind Snapchat, which lets users send images that can no longer be accessed after a short amount of time. And Gmail recently introduced the “Undo Send” feature that allows people to cancel sent emails, though only within a window of up to 30 seconds. Now, if only we could make our phones explode when they get stolen.

 

PARC is a Xerox company that provides tech services, expertise, and intellectual property to various companies, including Fortune 500 businesses, startups, and government agencies.


 

C

See more news at:

http://twitter.com/Cabe_Atwell

Printed Circuit Boards (PCBs) are without a doubt central to all electronics. As technology advances, however, PCBs must be made faster and smaller than ever before. Before you get busy, make sure you nip sloppy PCB production in the bud, before it costs you big bucks. Read on to discover the 12 biggest PCB development mistakes and how to avoid them.

 

Layout

 

1. Improper Planning

 

Have you ever heard that “proper planning prevents poor performance”? It’s true. There’s a reason we consider poor planning the number one PCB development mistake. There is no substitute for proper PCB planning; it saves you time and energy. If you build it wrong, you will have to spend additional resources to go back and fix it. How do you plan properly? Consider numbers 2-6 on our list before you physically begin building. You’ll be thankful you did.

 

2. Incorrect Design

 

There is an infinite number of layout possibilities with PCBs. Keep function in mind when designing the form. For example, if there’s a good chance you’ll need to add on in the future, you may want to consider something like a ball grid array (BGA), which can help conserve space on an existing board and let you build upon that design later. If your design must incorporate copper pours, a polygon-style design is your best bet. Whatever your function, choose the right form.

 

3. Improper Board Size

 

It’s much easier to begin with the right size. Although the portion of the project you’re working on now may only require a small board, if you’re going to add on in the future, you’re better off choosing the larger board now. Stringing multiple boards together can be difficult due to circuitry and connectivity issues. Plan adequately not only for current function but for future function, and you’ll save yourself time and money.

 

4. Failing to Group Like Items

 

Grounding your PCB is a critical part of production. Grouping like items will not only help you keep your trace lengths short (another important element of design), but it will also help you avoid circuitry issues, ease testing, and make error correction much simpler.

 

5. Software, Software, Software

 

We know you can design a PCB from scratch, but why would you want to when you can use software? Software makes your life easier. Electronic design automation (EDA) tools give you recommendations for the best layout, and other programs may suggest the best materials to use based on the board’s prospective function. Software won’t do all of the thinking for you, but it sure does help.

 

6. Using the Silk Screen Improperly

 

A huge ally when designing a PCB is the silk screen. Used properly, it’s a great tool that lets you map out all aspects of your PCB before construction, including circuitry planning. But be careful and maintain best practices: used improperly, the silk screen can make it hard to tell where connectors and components are supposed to go. Use full words as descriptors when possible, or keep a key of your symbols nearby.

 

 

Building

 

Once you’re done planning, you can begin building your board. You’re still not out of the woods, however. Building is another area where people make costly mistakes. Done well, though, you can build PCBs faster than ever.

 

7. Poorly-Constructed Via-in-Pad

 

This issue is one of the biggest detriments to proper PCB development. Many boards now require via-in-pads, but when soldered incorrectly, vias can lead to breaks in your ground plane. This creates a larger circuitry issue, as power travels between layers instead of through the intended connectors and components. Test your ground plane. If you suspect a shaky via-in-pad on your board, cap or mask it and test again. It may slow down production now, but it’ll save you time in the long run.

 

8. Using the Wrong Materials

 

Although this mistake may seem like a novice move, it happens. PCBs can be constructed from various materials. Know the purpose for which you are building your board, and which materials best suit that design, before you start building. If you’re building an FR-2 grade, single-sided PCB, you can use phenolic paper materials. Anything more complex should use epoxy and glass cloth. Also, different materials have different temperaments; keep this in mind. If you’re building a simple design that needs to hold up in a humid environment, it may be worth going with epoxy.

 

9. Too Lazy to Test It

 

If there’s one habit you change, it should be how frequently you test your prototype. Assuming your board is properly grounded and that circuits will behave exactly as their ground paths and voltages suggest is asking for trouble. We know it takes time to test your board, but it takes far more time to find and correct an error later on. Test it now. Every design has an issue; keep that thought in mind.

 

 

Manufacturing

 

So you properly planned and built your board. Things couldn’t still go wrong, could they? Wrong! They can, and they do. Here are the three mistakes to avoid.

 

10. Failing to Double Crunch the Numbers

 

We’ve all felt the pressure of an upcoming product deadline. You’re sweating, over-caffeinated, and running on a lack of sleep. We know you’re an engineer, but don’t let your ego cost your company huge amounts of money over an error. Always double-check your numbers before sending your model to production. This includes testing your board, ensuring the size is in line with your client’s specifications, and double-checking that your design is ideal for the intended function. It’s always better to have one model that needs rework than a thousand. Rewind to #1 in this list… proper planning. Never jump the gun when sending out the design.

 

11. Temperature Control

 

This step is often neglected, but it’s important. Even if you do everything right leading up to production, you will ruin your boards if you neglect temperature during development and storage. Every step in the process must factor in temperature. Soldering in cold temperatures, for example, often leads to poor connections. Likewise, storing boards in extreme heat or humidity may damage components and the board itself. At every step in the process, consider temperature and ensure it’s working for you.

 

12. Communicate

 

Building PCBs can be fun – if you end up with functional boards at the end of the grueling process. So you designed your board well and followed best practices during production – you’re in the clear, right? Not always. Ensure you communicate properly with your clients at all times. It sounds simple, but what’s said isn’t always what’s heard, and a finished product can still be rejected. Save yourself a step by confirming you’re creating what your client wants at every stage, so you can move on to more fun things, like making a paper airplane machine gun.


C

See more news at:

http://twitter.com/Cabe_Atwell
