
ara1blogpost.png

Motorola’s Project Ara. Building your own smartphone seems like a dream. I hope this catches on. (via Motorola)

 

Every smartphone has features some users do not want or could do without. With that in mind, Motorola has turned to designing a modular smartphone, which would let users connect the hardware they prefer for the applications they actually use. Known as Project Ara, the idea is to build modular pieces containing specific hardware elements, such as Wi-Fi radios, connection ports or even keyboards, that snap onto a common base platform. The idea for the modular phone came from the Phonebloks project, which envisioned users picking and choosing their own hardware and fastening it to a connectable base. In fact, Motorola has teamed up with the creators of Phonebloks for Project Ara and is currently looking to recruit what it calls 'Ara Scouts' to help design the project's modular pieces.

 

The possibility of a modular smartphone is certainly exciting. Imagine swapping in a new camera when a better one comes along without replacing the entire phone, or swapping out a damaged speaker (think the EVO 4G) without sending the handset in for service. Unlike the original Phonebloks design, Ara will use an 'endoskeleton' (known as the endo) that holds the modules in place; those modules could include everything from processors to different displays to an extra battery (incredibly convenient). While a truly modular phone is only in the planning stages at the moment, Motorola will be releasing a Module Developers Kit (MDK) to those who signed up as Ara Scouts as early as the end of this year. That means a fully modular smartphone could hit the market as early as the middle of next year; until then, we will have to make do with our non-modular phones.

 

C

See more news at:

http://twitter.com/Cabe_Atwell

timaxlife ti.jpg

MaxLife concept diagram. Cycle life goes up, but energy density sees little improvement. (via TI)

 

Texas Instruments is one of the most dominant technology companies in the industry. Behind Intel and Samsung, it is the world's third-largest producer of semiconductors, and it is the largest manufacturer of digital signal processors and analog semiconductors. Younger students may know TI only as the maker of its world-famous graphing calculators, but more experienced engineers quickly learn that TI technology can be found everywhere. Many of the ICs used in basic electronics come from TI.

 

One additional area where TI excels is energy-efficient electronics. Among its more popular devices is the MSP430 microcontroller family. These MCUs let developers create embedded applications that manage power extremely efficiently. The CPU can run at speeds up to 25 MHz, or the clock can be lowered to save power. More importantly, the MCU has low-power idle modes; while idling, the device can draw as little as 1 microamp of current. Along with its low-power capabilities, the MSP430 also supports the usual embedded communication protocols and peripherals.
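
To make that concrete, here is a rough sketch (not from TI's documentation, written CCS-style for the MSP430G2553 found on the LaunchPad) of the pattern behind those micro-amp figures: configure a slow timer, then park the CPU in low-power mode 3 and only wake for brief interrupts.

#include <msp430.h>

// Blink an LED from a timer interrupt while the CPU sleeps in LPM3.
// Register names follow the MSP430G2553 datasheet; adjust for other parts.
int main(void)
{
    WDTCTL = WDTPW | WDTHOLD;           // stop the watchdog timer
    P1DIR |= BIT0;                      // P1.0 (LaunchPad LED) as output

    BCSCTL3 |= LFXT1S_2;                // clock ACLK from the ~12 kHz VLO
    TACCR0  = 12000;                    // roughly 1 s period at 12 kHz
    TACCTL0 = CCIE;                     // enable the compare interrupt
    TACTL   = TASSEL_1 | MC_1;          // Timer_A: ACLK source, up mode

    __bis_SR_register(LPM3_bits | GIE); // CPU sleeps here; only ACLK keeps running
    return 0;
}

#pragma vector = TIMER0_A0_VECTOR
__interrupt void timer_isr(void)
{
    P1OUT ^= BIT0;                      // wake briefly, toggle the LED, go back to sleep
}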

 

Lately, TI has also been trying its hand at another energy-saving technology: battery management chips. Back in March, the company released its bq2419x family of chips, which it claimed could cut charging times roughly in half. That is a technology in high demand, largely because of the rise of tablets and smartphones; every Android user is well aware of the battery-draining apps we all so often use. TI is looking to provide a solution to ease those frustrations.

 

TI has now announced a few more energy-efficient chips, collectively known as the MaxLife chipsets. These include the bq27530 and bq27531 fuel gauge circuits, which work alongside the bq2416x and bq2419x chargers. Together they are expected to provide faster charging and extend battery longevity by up to 30 percent. The charger is directly controlled through an autonomous battery management system, which gives designers greater flexibility: because the control is autonomous, there is less software overhead, making the parts easier to integrate into a system. It also provides better thermal management and battery safety.

 

The MaxLife technology from TI is now available in a development kit, which pairs a bq27531 fuel gauge with a bq24192 charger over I2C. With this combination, single-cell lithium-ion batteries can be charged at up to 4.5 A. It is one of the first technologies that allows batteries to charge faster without damaging them, and I do not believe it will be long before we see these chips in consumer products.
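
The post above doesn't include code, but the basic idea is that the fuel gauge does the battery bookkeeping and a host microcontroller simply polls it over I2C. Here is a purely illustrative, Arduino-style sketch; the 7-bit address 0x55 and the 0x2C StateOfCharge command follow the usual bq27xxx convention, so check both against the bq27531 datasheet before trusting them.

#include <Wire.h>

// Assumed values, not taken from this post: 7-bit I2C address 0x55,
// StateOfCharge() as a 16-bit little-endian register at command 0x2C.
const uint8_t GAUGE_ADDR = 0x55;
const uint8_t CMD_SOC    = 0x2C;

// Read a 16-bit value from the fuel gauge, low byte first.
uint16_t readGauge16(uint8_t cmd) {
  Wire.beginTransmission(GAUGE_ADDR);
  Wire.write(cmd);
  Wire.endTransmission(false);         // repeated start, no stop
  Wire.requestFrom(GAUGE_ADDR, (uint8_t)2);
  uint16_t lo = Wire.read();
  uint16_t hi = Wire.read();
  return (hi << 8) | lo;
}

void setup() {
  Wire.begin();
  Serial.begin(9600);
}

void loop() {
  Serial.print("State of charge: ");
  Serial.print(readGauge16(CMD_SOC));  // reported in percent
  Serial.println(" %");
  delay(2000);
}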

 

C

See more news at:

http://twitter.com/Cabe_Atwell

castar.jpg

Artist concept of the finished castAR glasses (via castAR kickstarter page)

 

Augmented and virtual reality headwear has risen in popularity ever since Google Glass and the Oculus Rift hit the scene. Those glasses and head-mounted displays either overlay interactive applications on the environment being viewed or create a computer-generated world that puts the user in a simulated space. The two approaches are usually kept separate rather than combined in a single device, mostly because there is too much hardware to pack into such a small space. One company, however, has seemingly managed to incorporate both AR and VR into a simple pair of glasses. Just 20 hours after posting their proposition on Kickstarter, Technical Illusions reached their funding goal of $400,000 for the castAR AR and VR system.


Ex-Valve employees Jeri Ellsworth and Rick Johnson designed the glasses (the project began while they were at Valve) so that users can see 3D holographic projections situated right in front of them. This is done using two micro-projectors, one on each side of the glasses' frame, each projecting a portion of the image onto a flat surface. The user's eyes focus naturally on the combined image, which eliminates the dizziness and nausea experienced with other headsets while gaming (the Oculus is notorious for this). The system uses active shutter glasses with a 120 Hz refresh rate, which is necessary to view 3D video and images (higher would be better, of course). The glasses project the images (at 720p resolution) onto a specialized retro-reflective tracking surface, which bounces them straight back to the glasses; the video is piped to the projectors over an HDMI connection. A USB camera housed in the center of the frame scans the surface for IR LED markers built into the mat to track the user's head, and specialized software adjusts the image to match the viewing angle. A simple clip-on attachment converts the glasses from projected AR into true augmented reality (used without the mat) or full virtual reality similar to the Oculus Rift.

 

Another interesting add-on for the castAR system is the 'Magic Wand' controller, which has an IR marker on its tip so it can serve as either a gaming joystick of sorts or a 3D input device. It is also outfitted with several buttons, an analog stick and a trigger for additional options across multiple applications. Gaming with the castAR system isn't limited to video games either: the mat can be used for board games as well. Users can affix RFID bases to game tokens or miniatures, like those from Dungeons and Dragons or MechWarrior, which can then show crucial information about the player on the virtual board. Boards can be created and configured using the company's castAR software suite, which also supports online play, so friends can face each other over an internet connection. Those looking to get their hands on a castAR system can do so with a pledge of $189 and up, which nets a pair of castAR glasses and a 1-meter by 1-meter surface pad; $285 gets the whole package, including glasses, mat, the AR and VR clip-on and the Magic Wand.

 

C

See more news at:

http://twitter.com/Cabe_Atwell

iPadKid.jpg

I would have loved a tablet when I was his age... (via LAUSD)

 

Earlier this month (October 2013), the Los Angeles Unified School District unveiled its ambitious proposal to introduce a series of tools designed to raise the academic standards students need to succeed in college or a career. The $1 billion project aims to give every student in the district (more than 600,000 of them) an iPad in an effort to build the 21st-century technology skills that will prepare them for the future. While the initiative is designed to 'level the playing field' for wealthy and underprivileged children alike, giving them access to the same opportunities, the rollout has had a few shortcomings. For one, some parents wanted to know why students were not being taught traditional vocational skills such as machine shop, while others asked about school board politics and priorities rather than anything about the technology or the project itself.

 

The district's technology project ties into the nationwide Common Core State Standards Initiative (for children in grades K through 12), which sets a common standard for mathematics and English language arts. The second issue stems from the tablets themselves (purchased at $678 each), as some schools are not equipped with the Wi-Fi needed to download the educational material in class. Those schools will need to be upgraded so children have access to the related content, which may or may not have been included in the billion-dollar budget. As the first batch of tablets rolled out to roughly 47,000 students who were allowed to take them home, the district found that some tech-savvy students quickly disabled the tablets' security restrictions, allowing them to surf the web freely and visit sites deemed inappropriate by school standards (suffice it to say, they weren't learning math). This presented a liability for the district, as students could become the victims of sexual predators while using school property.

 

The schools quickly remedied that problem by restricting the tablets to in-school use until the district finds a way around those hacking endeavors. Another open question is how the district will repair or replace tablets that get damaged. Apple has stated it will replace 5% of units that no longer function, leaving the schools to find their own solution for the other 95%. The problems don't stop there: L.A. Unified forgot to factor in the training some teachers would need to use the iPads in their curriculum, since some have never used an iPad or any other tablet for that matter. Issues such as theft and the need for keyboards for classwork also have to be addressed. While the district has slapped a Band-Aid on a few of those cracks, it will need to do a lot more, and soon, before the dam breaches and it takes more than $1 billion to fix the problems. Still, the thinking is headed in the right direction, as students will undoubtedly need technology-related skills if they're to succeed in the future.

 

C

See more news at:

http://twitter.com/Cabe_Atwell

First, I am going to suggest a soundtrack for reading this post… Please hit play below and move on:

 

 

Touchscreens are becoming the user interface of the future. It began with smartphones, then came the iPod Touch and then tablets. Now touchscreens can be found in many places: some restaurants have small touchscreen computers that let diners flip through the menu and check their current bill, and newer apartment and office buildings are installing touchscreen directories at the entrance so visitors can reach the person they're looking for. Many computers and laptops now offer a touchscreen monitor as well. With that said, many new touch technologies are beginning to emerge, so let's take a look at some of these upcoming innovations and catch a glimpse of the future of the touchscreen interface.

 

ultrahaptics.JPG

(via Ultrahaptics)

 

To begin with, we have a new haptic feedback system being developed by researchers at the University of Bristol in the UK. It combines a Leap Motion controller with ultrasound speakers placed behind a display. The Leap Motion controller tracks the user's fingers and hand gestures, while the ultrasound speakers project a sensation onto the fingers to simulate the sense of touch. The system is called UltraHaptics and consists of an array of 320 of these ultrasonic transducers.

 

Tom Carter, a lead researcher involved in the work stated, “What you feel is a vibration. The ultrasound exerts a force on your skin, slightly displacing it. We then turn this on and off at a frequency suited to the receptors in your hand so that you feel the vibration. A 4-Hertz vibration feels like heavy raindrops on your hand. At around 125Hz it feels like you are touching foam and at 250Hz you get a strong buzz.”

 

 

A similar technology being developed by Disney Research uses electrostatic forces to simulate a sense of touch. Rather than using sound waves to compress the skin, Disney's researchers have been employing electrovibrations that can stretch and compress the skin, creating the same type of lateral friction one would experience when sliding a finger over a bump.

 

“Our brain perceives the 3D bump on a surface mostly from information that it receives via skin stretching,” said Ivan Poupyrev, who directs Disney Research, Pittsburgh's Interaction Group. “Therefore, if we can artificially stretch skin on a finger as it slides on the touch screen, the brain will be fooled into thinking an actual physical bump is on a touch screen even though the touch surface is completely smooth.”

 

 

Both of the aforementioned technologies hope to enhance the user experience and create more tactile-rich displays. Meanwhile, the Japanese company AsukaNet is developing an Aerial Imaging Plate. Much like a hologram, the system projects an image that appears to float in front of the user, who can then navigate menus or anything else a touchscreen interface might be used for. To accomplish this, a tablet interface is paired with reflective surfaces that project the image toward the user at a 45-degree angle. The user must stand in a specific position in front of the display; otherwise, the image appears as a normal flat picture. The company says this can actually be an advantage where privacy matters: at an ATM, for instance, only the person directly in front of the display would be able to see which numbers they are poking at. A projected display is also more sanitary than many interfaces, since no physical contact is involved and germs and viruses cannot be passed from one person to the next.

 

eyetoy.jpg

Printed toy with a little optical imaging sensor built in... (via Disney Research)

 

The last of the new and innovative touch technologies we can expect to see in the future is the curved touchscreen. One approach comes from Disney Research and is made during the 3D printing process. Thanks to a light-sensitive plastic known as photopolymer, optical fibers can be printed alongside the main structure of a 3D object. This lets engineers connect the optical fibers to an image source, which can then carry information to and from a curved surface. Disney has already created many prototypes, many of them cartoon-shaped creatures with large eyes that look around the room.

 

lgflex.jpg LGGFlex.jpg

The bendy screen on the left. Possible image of the actual LG G Flex phone on its way to market. (via LG)

 

LG has also announced a flexible display that is set to be released next year. It is a 6-inch flexible OLED screen that LG claims to be indestructible. The display is made on a plastic substrate and built up with layers of film-type encapsulation and a protection film. Overall, the display is only 0.44 mm thick, weighs just 7.2 grams, and has a vertical concave radius of 700 mm.

 

“LG Display is launching a new era of flexible displays for smartphones with its industry leading technology,” said Dr. Sang Deog Yeo, Executive Vice President and Chief Technology Officer of LG Display. “The flexible display market is expected to grow quickly as this technology is expected to expand further into diverse applications including automotive displays, tablets, and wearable devices. Our goal is to take an early lead in the flexible display market by introducing new products with enhanced performance and differentiated designs next year.”

 

LG will not be the only one in the flexible display market. Samsung has also been showing off prototypes and experimenting with technologies of its own. We can expect these products to hit the market sometime late next year. CES 2014 is also right around the corner, and as the world's largest consumer electronics gathering, it is almost certain to bring many more innovative display technologies.

 

C

See more news at:

http://twitter.com/Cabe_Atwell

Arduinos are awesome for what they have introduced to the world. They have allowed young and old alike to learn and adapt to working with microcontrollers, they have made computing and digital control available to everyone, and they have kept everything open source along the way. As a result, Arduino now has one of the largest communities in the world of DIY. However, Arduinos have their limitations, and with the Internet of Things slowly working its way into our lives, Arduino does not want to be a step behind.

 

With that said, two new boards are on the way from Arduino: the Arduino Tre and the Galileo. Both will feature processors capable of running Linux applications, which throws Arduino right into competition with single-board computers (SBCs).

 

ArduinoTre_LandingPage.jpg

Arduino Tre... I suspect this board may be on element14 some day soon. (via Arduino)

 

To start with the Tre: it is going to feature a 1-GHz Texas Instruments Sitara AM335x processor (ARM Cortex-A8). If that processor sounds familiar, it's because it is the same one found on the BeagleBone Black. Indeed, the Arduino Tre is the result of a close collaboration with the BeagleBoard.org foundation. As a result, there is already plenty of support and help available for working with the Cortex-A8.

 

“By choosing TI's Sitara AM335x processor to power the Arduino Tre, we're enabling customers to leverage the capabilities of an exponentially faster processor running full Linux. Our customers now have a scalable portfolio at their fingertips, from the microcontroller-based Uno to the Tre Linux computer,” commented Massimo Banzi, co-founder of Arduino.

 

The board's layout will look like a normal Uno in the middle, with expanded pins and connectivity around the outside. This design keeps current and previous shields compatible with the Tre and lets you build on projects you may have already worked on. In addition, it looks like the board will include pins for ZigBee compatibility in the middle.

 

The board will offer plenty of GPIO pins along with 1080p HDMI output and high-definition audio input/output. A 10/100 Ethernet port will be available along with a connector for LCD expansion. As with many other SBCs, an SD card slot provides storage and typically holds the system image. One interesting note: this will be the first Arduino board manufactured in the United States.

 

IntelGalileo_fabD_Front_450px.jpg

Arduino Galileo, Intel joins the fray... (via Arduino)

 

Moving on to the Galileo: this board is a collaboration between Intel and Arduino. It will feature an Intel Quark SoC X1000 application processor, one of the first Intel chips designed for SBCs and aimed at a market previously dominated by ARM.

 

“Intel Galileo features the Intel Quark SoC X1000, the first product from the Intel Quark technology family of low-power, small core products,” an Intel representative said. “Intel Quark technology will extend Intel architecture into rapidly growing areas-from the Internet of Things to wearable computing in the future.”

 

The Galileo, like the Tre, will feature a pinout that resembles the Arduino Uno. All of the pins operate at 3.3 V, however, and a jumper is available to switch the pin voltages to 5 V when needed; since previous and current Arduino shields are meant to be compatible, that jumper will be necessary when using a 5 V shield. In addition to the Uno's features, the Galileo will have a mini PCI Express slot, a 100 Mb Ethernet port, a micro-SD slot, an RS-232 serial port, and 8 MB of NAND flash memory.
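
To give a sense of what that compatibility means in practice, here is a trivial Arduino-style sketch of the sort that should run unchanged on an Uno, a Tre or a Galileo, since all three expose the familiar Uno headers. The pin choices are arbitrary examples, and the voltage caveat above still applies to 5 V-only shields.

// Read an analog sensor and dim an LED with PWM; nothing board-specific here.
const int SENSOR_PIN = A0;   // analog input, e.g. a potentiometer wiper
const int LED_PIN    = 9;    // PWM-capable pin on the Uno header

void setup() {
  pinMode(LED_PIN, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(SENSOR_PIN);   // 0..1023
  analogWrite(LED_PIN, raw / 4);      // scale to 0..255 PWM duty
  Serial.println(raw);
  delay(100);
}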

 

Arduino will now be a major player in the SBC market. Arduino has already made a prominent name for itself in the microcontroller world, and that huge community will likely carry over to its SBC boards. One major advantage the Intel-based board brings to the market is support for the x86 architecture. In addition, Intel CEO Brian Krzanich has announced that the company will donate 50,000 boards to over 1,000 universities worldwide, which should definitely kick things off in the right direction. The Galileo is set to be released in November and sell for under $60; the Arduino Tre is due sometime in spring 2014, and a price has not yet been announced. Either way, makers and the DIY community now have a plethora of boards to choose from and experiment with. Which one will you choose?

 

C

See more news at:

http://twitter.com/Cabe_Atwell


 

Nine Inch Nails (NIN) like to put on visually stunning shows. For their next tour around the world, they will be using some new tech to improve their visual effects. One of the new technologies being employed is a Microsoft Kinect, which will track the band's movements and project that video onto mobile screens on stage. In addition, standard video cameras and thermal cameras will capture additional footage. The mobile screens will display all kinds of visuals and create the illusion of the band disappearing and re-appearing throughout the show. The production is a collaboration between frontman Trent Reznor, lighting designer Roy Bennett, and artistic director Rob Sheridan.

 

Furthermore, NIN is also trying its hand at a different kind of release. Its latest album, Hesitation Marks, has been mastered twice: once as the standard "loud" master and once as an alternate "audiophile" master. The band says the differences will be subtle to most listeners, but the audiophile version will sound slightly different on high-end equipment and may be preferred by those who understand the mastering process.

 

Mastering Engineer Tom Baker adds, “I believe it was Trent's idea to master the album two different ways, and to my knowledge it has never been done before. The standard version is “loud” and more aggressive and has more of a bite or edge to the sound with a tighter low end. The Audiophile Mastered Version highlights the mixes as they are without compromising the dynamics and low end, and not being concerned about how “loud” the album would be. The goal was to simply allow the mixes to retain the spatial relationship between instruments and the robust, grandiose sound.”

 

One of the more interesting moves the band has made in the past was crowdsourcing 400 GB of video footage from shows played throughout a tour. The tour, "Lights in the Sky," took place in 2008, and after plans for a concert film were canceled, the band decided it would be better to put the footage online for people to mix up on their own. The HD footage was totally unedited and not meant for casual viewing; the hope was that people with spare time and editing skills would cut together their own versions of a film for fans to view. As we can see, NIN has some clever moves up its sleeve for keeping fans pleased. It will be interesting to see how the new tour works out with all the visual effects being employed, and it is likely that more bands will take notes and put the technology to their own uses.

 

C

See more news at:

http://twitter.com/Cabe_Atwell


Searching for parts: easily one of the most important tasks when creating a new design.  Get lazy here, and it will haunt the design later.  That importance is paired with the significant amount of time it takes to get just the right parts selected, which is probably why engineers everywhere fight specification changes once the parts are defined.  It's a fine balance getting every part to work in concert.

 

Given the importance of the part selection process, surely there are many tools out there to help discover parts, keep track of candidates, and manage the notes one takes while searching.  Sadly, the only tools that exist are the search engines provided by the part distributors.  These are great for finding parts based on a wide array of specs, but each search is merely a way to find information.  While some CAD vendors like Cadence and Synopsys have solutions, they are expensive and generally only accessible to engineers at large companies.  Beyond that, taking notes and remembering parts is a job usually relegated to Excel or a paper notebook.  That's a great solution... for the year 1998!


That's where Frame-It steps in.  Its team noticed that the wealth of information collected during part selection needed an organized home and a digital notepad, so they built Frame-It: a Google Chrome extension that saves the webpages or documents being browsed while letting the user take notes on the content.  The content can also be named and organized with folders and tags, which makes it easy to find later:

 

Capture View.jpg

 

Where does the saved data go?  Straight to the user's Frame-It account in the cloud, all without disrupting the part exploration process.  When users later say, “Oh! I was just looking at a part last week that might help!” they can refer to their Frame-It workbench and review all of their data:

Workbench View.jpg

It's one of those products that is so simple, clean, and straightforward that one might wonder why it hasn't been mainstream for years.

 

As with any brand-new product launch, there are a few things that make it tricky to use.  First, if you're taking notes and switch to another tab, it will erase the work in progress (I use two different browser windows to work around this).  Also, scrolling through a document can only be done with a mouse scroll wheel rather than the side bar.  A few quirks, but certainly easy to accept for now.

 

I have been using it for over a year on a few projects and have found it to be a remarkable tool for capturing information and accessing it later.  It is fast, unobtrusive, and builds a wonderful personal repository for future projects.  I encourage you to give it a shot!

Class Photo.jpg

If you've been in electronics for any time at all, it is easy to see a pattern develop between hardware and firmware/software engineers.  As soon as a problem crops up, it is instantly clear to the hardware team that the code needs fixing.  Ask the software engineers, however, and they confidently state that the hardware is at fault.  When pressed for details, both sides end up saying something like, “Well, I don't know what is wrong, but I know it can't be related to my part of the design because of _______.”

 

How is it that these assumptions about the problem coming from the other side are made time and time again?  Much of it probably stems from human nature's inclination to toss any problem over the wall and hope someone else fixes it.  This is likely strengthened by an engineer's confidence in a design they have spent untold hours creating.  But another source is rooted in a lack of understanding of how the other half of the hardware/software pair works.

 

How can we prevent this unproductive back-and-forth?  It might help to have anyone involved in electronics (on either the hardware or software side) take at least one course in microcontroller design to show the connections (and problems) that occur between hardware and software.  Any piece of circuitry will eventually need to be controlled by or communicate with software, and software usually has to touch the real world at some point.  The most remarkable A/D circuit is useless if the communication bus the digital signal must pass over does not have the required bandwidth.  Similarly, a beautiful chunk of code written to control an RGB LED matrix won't work if the hardware isn't designed to supply the required amount of power.  A course that forces the engineer to face problems on both sides can be humbling; for example, a hardware engineer might spend hours troubleshooting his or her code only to find that the motor was connected to the wrong power rail.

 

An example of such a course, ECE4760, taught by Prof. Batten at Cornell University, is now available on YouTube and offers a wealth of information that would greatly help anyone designing in the electrical engineering space.  It could be replicated at home thanks to the $10 MSP430 LaunchPad and free development tools.  And while an outside observer won't benefit from in-person instruction, they can easily engage with communities like element14's MSP430 group or TI's E2E forum.  The best part is that the course is project based, giving students an opportunity to learn the difference between 'This should work' and 'This does work.'
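
As a flavor of the kind of hardware/software boundary detail such a course drills in, here is a rough LaunchPad-style sketch (register names per the MSP430G2553 datasheet, not taken from the course material): polling the user button 'should work' after setting the pin direction, yet it reads garbage until the internal pull-up resistor is enabled as well.

#include <msp430.h>

// Light the P1.0 LED while the S2 button on P1.3 is held.
// The easy-to-miss hardware detail is the internal pull-up; without it the
// floating input makes otherwise-correct code look broken.
int main(void)
{
    WDTCTL = WDTPW | WDTHOLD;     // stop the watchdog
    P1DIR |= BIT0;                // P1.0 LED as output
    P1DIR &= ~BIT3;               // P1.3 button as input
    P1REN |= BIT3;                // enable the internal resistor (the line people forget)
    P1OUT |= BIT3;                // select pull-up; the button pulls the pin low

    while (1) {
        if (P1IN & BIT3)
            P1OUT &= ~BIT0;       // not pressed: LED off
        else
            P1OUT |= BIT0;        // pressed: LED on
    }
}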

 

Of course this is just one suggestion to give each side a glimpse of the other.  After 5 years in the field any lessons of humility will be mostly forgotten.  Have you noticed the “it's their problem” exchange before?  What do you think would help with it?  Please share your thoughts in the comments.  In the end, we're all on the same team!

LED in LA.jpg

Clinton Climate Initiative LED streetlight... Los Angeles - now the "City of LED Lights" (via LA Bureau of Street Lighting)

 

 

Street lighting has been around since the ancient Romans and Greeks used it for safety (tripping over obstacles, etc.) and to keep thieves at bay. Those were of course oil lamps, and they remained in use all over the globe until 1875, when Russian inventor Pavel Yablochkov introduced his 'Yablochkov Candle' to the world. His electric lamp was first used in Paris, where 80 of them were deployed to light the Grands Magasins du Louvre department store, which is how the city subsequently earned its nickname 'The City of Lights.' Since that time, electric street lights in one form or another have been used to illuminate highways and on/off ramps for over 100 years; however, this is about to change.

 

LEDs have proven to be much more efficient than traditional lamps when it comes to power use, so it was only natural that the Los Angeles Bureau of Street Lighting used them to replace the city's aging streetlights. The city's Light Emitting Diode Street Lighting Retrofit (part of the Clinton Climate Initiative) has replaced over 140,000 streetlights with more efficient LED units over the past four years. As a result of the switch, the city's energy use for street lighting has dropped by over 62%, and carbon emissions have been cut by over 47,000 metric tons per year. Light pollution has also been reduced thanks to the white LEDs, earning high praise from the LAPD as well as the Dark Skies Association (the premier authority on light pollution). Los Angeles isn't the only city making the transition to LED street lighting: Las Vegas replaced 42,000 lights back in May of this year, Austin is looking to replace 35,000 of its lights, and San Antonio is set to install 20,000 LED lights as the trend to go green takes the US by storm.

 

The adoption of LED street lighting has been increasing steadily, from 3 million units at the beginning of 2012 to an expected 17 million-plus by 2020. That is no surprise, as demand for energy has been climbing for decades while more countries expand their infrastructure. The benefits are obvious: LA alone has cut its electric bill by $7 million, with an additional $2.5 million saved through reduced maintenance. The city isn't finished replacing its defunct high-pressure sodium fixtures either; it plans to switch out an additional 70,000 units in the second phase of the initiative, which is expected to be completed in 2014. The total budget to replace those fixtures tops out at $56.9 million, which may sound like a lot of money, but the savings that accumulate over just a few years mean the city comes out ahead in the long run.
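
A quick back-of-the-envelope check, using only the figures quoted above and ignoring financing, inflation and fixture lifetime: $7 million per year in electricity plus $2.5 million per year in maintenance is roughly $9.5 million in annual savings, so a $56.9 million retrofit pays for itself in about 56.9 / 9.5 ≈ 6 years.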

 

C

See more news at:

http://twitter.com/Cabe_Atwell

I love attending events like last week's Denver Startup Week, Boulder Startup Week, BLUR conference, and various entrepreneurial meetups around the country. Spending time with this collection of interesting people who are passionate about what they do is probably the best way to learn something, be inspired, and enjoy an evening. While I've been going to them for years, 2013 has shown something unique: an affinity for hardware startups.

 

What is going on?  It isn't as if the well of promising software and web tech companies is drying up.  To start, one should understand the difficulties that have traditionally faced hardware startups.  Prototyping, testing, and changing hardware is certainly more expensive and time-consuming than recompiling code after an Office Space-style 'Monday mistake': “Oops, I shorted an inner-layer trace; better order a new set of boards.  See you in a couple weeks!”  Worse than the actual mistake is the constant risk of such an error at each revision release.  Then, with working protos up and running, there are either too few of them working at any given time, or so many that it is impossible to track hardware changes across all units.  And once development is complete, the idea of scaling hardware is daunting.  Software can grow to 1 million users quickly, while building a million units is no small task, with serious risks:

SW HW dev.jpg

Given all of the challenges above, why the migration from hardware aversion to hardware appreciation, especially among sources of capital such as VC firms?  First, the feedback loop is getting shorter and shorter.  Thanks to falling prices for decent rapid prototyping tools like 3D printing and PCB fab and assembly, plus the rise of hardware development boards like those sold by chip manufacturers, Adafruit, and SparkFun, a startup can get a prototype working at very little cost.  Early prototypes can be designed for hackability, making hardware revisions a simple matter of common tools such as wire, a soldering iron, a drill, and a hobby knife.  Further speeding hardware development is the significant growth of digital hardware such as microcontrollers and FPGAs; as digital chips swallow up more hardware functions, development shifts into the realm of changing a design with a firmware update.  Finally, the uncertainty of market demand when preparing for manufacturing can be mitigated with preorders enabled by crowdfunding.  Fast-following customers will still have to wait for the second production run, but planning manufacturing around early adopters gains the benefit of matching supply to demand on the first run.

 

For all of the midnight engineers who have been tinkering for years waiting for the business environment to become friendly: now is the time!  Get a prototype built that can wow an audience and join me at local startup weeks, Maker Faires, and entrepreneurial meetups.  Talk to people to see if your idea has legs.  The capital is out there now for teams that have the idea and the ability to take it seriously, so get to it!

toothsensor.JPG

Spy level secret sensors everywhere... can we trust anyone? (via National Taiwan University)

 

Remember when the dentist told you to lay off the crunchy sweets? Have you ever wondered just how much you talk, eat or drink? Soon a little implanted sensor that sits alongside a molar could tell you and your physician a lot about your oral activity. Researcher Hao-Hua Chu and his team at the National Taiwan University in Taipei have begun developing such a device and have made great progress in teaching it to recognize oral activities like coughing and eating. One day it could even tell you, via your smartphone, whether you grind your teeth in your sleep.

 

The device consists of a tiny 4.5 mm x 10 mm circuit containing a tri-axial accelerometer, coated in dental resin. It currently fits inside dentures or a dental brace, and in time it is expected to fit in a small tooth cavity or a crown. The prototype is powered by an external battery, but the team is searching for rechargeable micro-batteries that would get rid of the wires. As mentioned above, the tooth probe will one day communicate with your smartphone over Bluetooth; however, the effects of propagating microwaves on oral tissue have not been extensively studied, so integrating a radio into the probe will have to wait until the research supports it.

 

The team is experimenting with software algorithms that teach the probe to recognize actions, trying a C4.5 decision tree (C4.5 DT), multivariate logistic regression (MVR), and a support vector machine (SVM) to turn accelerometer readings into recognized activities. They also applied the three methods to two models: a person-dependent model, which learns an individual's own movements, and a more general person-independent model, which trains on data from seven participants and then tries to detect the oral activity of the eighth.
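
The post doesn't describe the exact features the classifiers are fed, but systems like this typically slice the tri-axial accelerometer stream into short windows and compute simple statistics over each one. The C++ sketch below is purely illustrative of that windowing step (the six mean/standard-deviation features are an assumption, not the team's published feature set); in the person-independent test, windows from seven participants would train the model and the eighth participant's windows would be classified.

#include <array>
#include <cmath>
#include <vector>

// One tri-axial accelerometer sample.
struct Sample { float x, y, z; };

// Per-axis mean and standard deviation over a window: six features that
// could be handed to an off-the-shelf SVM, decision tree, or regression
// model. Illustrative only; not the NTU team's actual feature set.
std::array<float, 6> windowFeatures(const std::vector<Sample>& window) {
    std::array<float, 6> f{};   // {mean_x, mean_y, mean_z, sd_x, sd_y, sd_z}
    const float n = static_cast<float>(window.size());
    for (const Sample& s : window) {
        f[0] += s.x;  f[1] += s.y;  f[2] += s.z;
    }
    f[0] /= n;  f[1] /= n;  f[2] /= n;
    for (const Sample& s : window) {
        f[3] += (s.x - f[0]) * (s.x - f[0]);
        f[4] += (s.y - f[1]) * (s.y - f[1]);
        f[5] += (s.z - f[2]) * (s.z - f[2]);
    }
    f[3] = std::sqrt(f[3] / n);
    f[4] = std::sqrt(f[4] / n);
    f[5] = std::sqrt(f[5] / n);
    return f;
}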

 

The results showed that the SVM was the most effective algorithm for detecting talking, eating, drinking and coughing. Its performance was much better in the person-dependent model, scoring roughly 90% correct recognition or better across all activities.

 

The SVM was far less effective in the person-independent approach, managing about 60% accuracy. The team attributes this to a couple of factors. First, every person has a different oral structure, so the device ends up implanted in a slightly different location, which changes how it senses motion. Second, every person performs these activities at different rates and with different levels of force. Taking these factors into account, the team expects to improve the general person-independent model.

 

The team wants to use the device to study people who clench or grind their teeth. For others, it could monitor the frequency of different oral activities, “mouth motion” data that can be sent, stored and shared through a patient’s smartphone. This could help patients and doctors assess the effectiveness or need for different dental interventions.

 

The team’s results will be made available at this year’s International Symposium on Wearable Computers in Zurich.

 

C

See more news at:

http://twitter.com/Cabe_Atwell

 

CadSoft Computer participated in OHM 2013 in early August. The five-day festival is one of Europe's most popular hacker and maker events and took place in the Netherlands, north of Amsterdam.

CadSoft appeared as a sponsor of the event and was represented by André Schmeets, one of Farnell element14’s EAGLE experts, who ran two lectures about CadSoft’s EAGLE software throughout the event.

He commented: “Participating at OHM was a great experience: 3,000 participants in a tent city where all sessions, lectures and discussions took place. The relaxed atmosphere is fascinating, with a few tents even providing soldering kits for everyone to sit down and work. The hacker and maker scene is becoming increasingly popular and this provides a lot of exciting opportunities for companies like CadSoft.”

Thomas Liratsch, Managing Director of CadSoft, explained what it meant to take part: “PCB design plays an important role in the hacker and makers environment.  Our EAGLE software is the perfect tool for hobbyists due to its ease of use and cost effectiveness. For many years we have identified the maker scene as one of our key target groups – a long time before ‘makers’ were even becoming trendy. This year, we had an experienced expert at OHM 2013 who led a series of lectures and workshops to demonstrate the software’s capability. This was a fantastic opportunity to share our product with so many people.”

https://ohm2013.org/site/

hackerscouts.png

Hacker Scouts hope to retain their logo. (via Hacker Scouts)

 

Trademark and copyright infringement can be a serious offense punishable by hefty fines and even jail time. Just ask anyone convicted of these offenses (including file sharing and torrenting) and they'll tell you that most instances are not worth the hassle. One of the more recent incidents came from the world's leading intelligence agency, the NSA, which has threatened lawsuits against websites (notably Zazzle.com) and individuals selling humorous t-shirts depicting the agency alongside unflattering quotes. The problem is not the quotes themselves but the use of the agency's protected logo. It's understandable that a highly secretive intelligence agency would go through the motions of filing lawsuits against those who would 'steal' from it (yes, pun intended), but what about organizations that cater to children by instilling a sense of honor and morality through education? That's the case between the Boy Scouts of America and a non-profit Oakland-based start-up known as Hacker Scouts. Don't let the name fool you: Hacker Scouts doesn't train kids to break into sensitive computers or write devastating viruses aimed at taking down financial institutions. Rather, it provides a learning platform that gets kids into engineering, development, art, science and math in an effort to prepare them for the future of technology.


The problem the BSA has in this case is a two-sided coin, with one issue being the use of the word Scout and the other a similar mission statement between the two organizations. The Boy Scouts received a Congressional Charter back in 1916, a federal law that outlines the organization's mission, authority and activities, and that effectively reserves the words Scout and Scouting (in that context) for the organization itself. Those who use the words without obtaining permission from the BSA can be charged with copyright and trademark infringement, and while the BSA routinely licenses the name to other businesses and organizations, it has no mercy for those who simply take the words without paying.

The second issue the BSA has with Hacker Scouts is the group's mission statement, which says the Hacker Scouts model provides knowledge, application and retention of concepts and skills while supporting independence and interest. More specifically, it's the last line of the group's outline that the BSA is scrutinizing, which states that it instills the development of strong moral character and leadership skills through its core values. By comparison, the BSA's statement reads 'to prepare young people to make ethical and moral choices by instilling in them Scout values.' Samantha Cook, head of Hacker Scouts, says the group's mission statement was modeled on guilds, in which an apprentice is taught by an expert in a related field, and that the Boy Scouts never entered the minds of those who founded the non-profit. Only time will tell if the Scouts can live up to their tradition of instilling strong moral character into their ranks, as both groups are centered on giving children opportunities to learn and grow into responsible adults.

 

C

See more news at:

http://twitter.com/Cabe_Atwell

gTar.jpg

The gTar sports a full guitar body with an iPhone dock to communicate with the free gTar app, and an LED-lined fretboard to guide users as they play. (via gTar)

 

An engineering example to note. We all can learn from some obvious innovation.

 

iOS devices, old and new, have proven to be highly flexible in their wide range of applications: from a streamlined productivity gadget to a full-fledged musical rocking machine. Incident Technologies, a casual music entertainment company, has taken the latter road with the introduction of its iOS-based guitar learning tool - the gTar!

 

The gTar was originally developed to help computer musicians produce music with an easy-to-play guitar. Incident Tech, now headquartered in San Francisco, grew as a company as the gTar grew into the learning tool it is today. The company states: “By empowering anybody with the tools to enjoy, create, and perform music naturally and intuitively, we seek to keep music core to the human experience for everyone.” The device works much like a standard guitar: a full-bodied instrument with a neck, fretboard, headstock, bridge and steel strings. However, a few glaring differences catch the eye immediately: the front of the body is fitted with an iPhone dock, and the fretboard is lined with LED markers for every possible string/fret combination.

 

To use the gTar, players can select songs from the gTar iPhone app to play along with, or let the device behave more like a real guitar in “Free Play” mode. When playing along with a song, the app drives the fretboard LEDs to show users what they should be playing. Sound is generated digitally (as MIDI output) through the gTar app, so a wide range of sounds is available to strum along to, such as the “warm synth” and “booming grand piano” voices. Users can also use the gTar's auxiliary output to pump sound through larger speakers, or its USB output to connect the gTar to a computer recording program such as GarageBand.

 

The Incident group plans to release a software development kit for the gTar, which should expand its use to a larger spectrum of digital audio apps. Incident Tech's gTar is currently priced at $450, though early-bird packages are available through the Kickstarter page for a pledge of $399.

 

C

See more news at:

http://twitter.com/Cabe_Atwell
