

857 Posts authored by: Cabe Atwell


SMI RED-oem Remote Eye Tracking platform render. (via SMI)


Law enforcement and federal agencies have been using polygraph machines to detect lies since Cesare Lombroso introduced his blood-pressure device back in 1895. Before that? Torture was the preferred method of rooting out fibs (and still is, to some extent). Just ask any witch present at the Salem trials; they could probably tell you it didn't work that well. Some analysts will tell you the eyes are the gateway to detecting whether someone is telling the truth. They claim the rate at which a person blinks is a telltale sign of lying, and that avoiding eye contact, or even glancing up and to the left or right, may indicate false pretenses. Some of the early pioneers of the computerized polygraph have banded together to form a company, known as Converus, which is developing a new platform that tracks eye movement to detect deception.


The soon-to-be-released EyeDetect device is outfitted with German firm SMI's (SensoMotoric Instruments) RED-oem Eye Tracking 3D camera system, which tracks gaze, eye movement and pupil dilation down to 1/10 of a millimeter. According to Converus, lying causes minute changes in the eye's behavior because it induces 'cognitive load' (in psychology, the load placed on the executive control of working memory), which affects eye movement. Think of it like computer RAM holding on to pieces of data before they are replaced by different programs. EyeDetect captures that ocular data and analyzes it to assess the 'likelihood' of deception while 'suspects' answer a series of true-or-false questions. The company claims the system has an accuracy rate of 85%, which is fairly high in terms of reliability, but most US courts still don't allow polygraph tests to be submitted as evidence. Converus is set to launch the device in April of this year, with Mexico as its first test market. Businesses will use it for pre-employment screening as well as for random testing on employees to weed out individuals who accept bribes or are involved in other nefarious activities (there go police officers and government officials).
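Converus hasn't published its scoring model, but the general idea of collapsing several ocular measurements into a single 'likelihood' can be sketched with a toy classifier. Everything here (the feature names, baselines and weights) is invented for illustration:

```python
import math

# Toy deception "likelihood" from ocular features. All baselines and
# weights are hypothetical; this is NOT Converus's actual model.
def deception_likelihood(blink_rate_hz, pupil_dilation_mm, reading_speed_wpm):
    features = {
        "blink": (blink_rate_hz - 0.3) / 0.3,          # vs. ~0.3 blinks/s baseline
        "pupil": pupil_dilation_mm / 0.1,              # dilation change, ~0.1 mm units
        "speed": (200.0 - reading_speed_wpm) / 200.0,  # slower reading = more load
    }
    weights = {"blink": 0.8, "pupil": 1.2, "speed": 1.0}  # made-up weights
    z = sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # squash to a 0..1 score

baseline = deception_likelihood(0.3, 0.0, 200)   # at-baseline reader
elevated = deception_likelihood(0.6, 0.15, 150)  # elevated features score higher
```

A real system would be trained on labeled data and use far richer features; the point is only that several noisy ocular signals get squashed into one probability-like score.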


C

See more news at:

http://twitter.com/Cabe_Atwell




Menu projection at Eggcellent (via Advanced Technology Labs)


Japan is the land of technology, and this week that innovation has made its way to the restaurant industry, where, according to the Recruit Advanced Technology Lab, the goal is to redefine customer service.

 

A restaurant in Tokyo called Eggcellent (which, of course, specializes in all things egg) is expected to be one of the first restaurants in the world to feature an almost human-free dining experience. The facility is expected to incorporate smart glasses, gesture interfaces, customer face identification, completely wireless payments, avatars, augmented reality, iPad-based food ordering and tracking and more.

 

Is it too good to be true? Well, probably. Recruit said the infrastructure is based on iBeacon, allowing a customer to enter the restaurant, view the menu, get food recommendations, order, wait for their food, select what they’d like to watch on TV and pay – all without interacting with a human wait staff.

There are a few other unique tech features offered at the café that attempt to solve the drawbacks of fine dining. For one thing, when a customer sits down at a booth, the iPad at the table wirelessly syncs with their social media accounts, surfacing their friends' favorite dishes at that establishment. The software also keeps a running tally of the order in which each customer's dishes come out of the kitchen, so no one is left wondering when their food will make it to the table.
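At heart, that running tally is a first-in-first-out queue that every table's iPad can query. A minimal sketch (the class and method names are invented; Recruit has not published its implementation):

```python
from collections import deque

# FIFO kitchen queue any table's iPad could query. Names are invented
# for illustration only.
class KitchenQueue:
    def __init__(self):
        self._orders = deque()

    def place(self, table, dish):
        self._orders.append((table, dish))

    def serve_next(self):
        # Kitchen finishes the oldest outstanding dish.
        return self._orders.popleft()

    def position(self, table):
        # 1-based queue position of the table's earliest outstanding dish.
        for i, (t, _dish) in enumerate(self._orders, start=1):
            if t == table:
                return i
        return None

q = KitchenQueue()
q.place("table-1", "omelette")
q.place("table-2", "eggs benedict")
q.place("table-1", "frittata")
```

Each table asks `position()` for its spot in line, so "when is my food coming?" becomes a lookup instead of a question for a waiter.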

 

The dining technology features a Kinect sensor, an assisting PC, a Wii remote, a projector and a microphone to give users an interactive experience. TV screens in the restaurant are equipped with the same technologies, enabling customers to change the station, order food from a virtual server and more, all wirelessly. While there was no talk of incorporating gaming into the restaurant, the possibility certainly isn't far off.

 

A virtual dining experience may sound like something off in the distant future, but incorporating the technology into the everyday dining experience makes a lot of sense. For one thing, the technology pays for itself because restaurants can downsize their wait staff. Secondly, the customer will never have to wait for a busy waiter to take their order, or wonder when their food will arrive. There is certainly a very relevant place for human waiters in the restaurant industry, but incorporating both man and machine can produce the optimum dining experience of the future.


 

C

See more news at:

http://twitter.com/Cabe_Atwell

Researchers at the Massachusetts Institute of Technology (MIT) have taken household plants and paired them with nanomaterials to create bionic plants that can do everything from monitoring environmental pollutants to detecting chemical weapons.

 

Researchers Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering, and plant biologist Juan Pablo Giraldo worked together to harness plant power as a new kind of technology platform. The research team chose plants because of their ability to repair themselves, survive harsh outdoor environments and provide their own power and water distribution. The emerging field, nicknamed "plant nanobionics," combines plant biology with chemical engineering nanotechnology to create 'super plants.' The potential of plant nanobionics is relatively untapped and the possibilities are endless. For this reason, Strano and Giraldo set out to discover what an average plant can really do.

 


Professor Michael Strano (left) and postdoc fellow Juan Giraldo (right) in lab at MIT (courtesy of Bryce Vickmark of MIT)

 

The Process

 

To create bionic plants, the researchers embed chloroplasts with cerium oxide nanoparticles, or nanoceria. The nanoceria is delivered to the plant through lipid exchange envelope penetration, which allows the substance to pass through the chloroplast's protective membrane without damaging the molecules inside.

 

Through this process, the researchers began installing semiconducting carbon nanotubes, coated in negatively charged DNA, into the chloroplasts as well. This boosted the plants' ability to absorb light, including wavelengths typically outside a plant's range, such as near-infrared, ultraviolet and green light. Plants treated this way exhibited a 49 percent increase in photosynthetic activity.

 

The researchers then used vascular infusion to inject nanoparticles into the plants through nanotubes, making the plants “bionic.” While researchers are still unsure of how the process affects the plant’s glucose production, they were able to create a variety of plants with potentially practical uses in the field of biochemistry.

 


Researchers use a near-infrared microscope to detect the output of a carbon nanotube sensor in an Arabidopsis thaliana plant (courtesy of Bryce Vickmark, MIT)

 

Practical Uses

 

The research team used Arabidopsis thaliana as its plant model and installed a carbon nanotube sensor designed to detect nitric oxide, a common environmental pollutant produced through combustion. In the experiment, the team effectively gave the plant supernatural properties: when presented with the toxin, its luminescence changed, telling the researchers that it had indeed detected the toxin's presence.

Giraldo and Strano created a number of nanotubes that can sense various chemicals, including the explosive TNT, the nerve agent sarin and hydrogen peroxide. The target molecule is detected when it encounters the polymer encasing the nanotube; when that happens, the fluorescence of the plant changes, revealing the presence of the threat.

 

The team has plans to enhance its carbon nanotubes to create an army of plants that can detect various biochemical threats in real time, at very low concentrations. The team is also working on developing bionic plants that rely on electronic nanomaterials, such as graphene.

 

Giraldo said the field of plant nanobionics is still in its developing stages. He considers it a great opportunity for plant biologists and chemical engineering nanotechnologists to work together toward a world of technology powered by plants.


C

See more news at:

http://twitter.com/Cabe_Atwell


Cortana in action (via WPC)


One of the biggest features of Apple's mobile OS is its voice recognition tool, Siri. Siri allows users to ask questions such as “are there any taco restaurants around” and will process the question and give the user an appropriate answer - we know the deal. Most impressively, Siri is good at understanding people's everyday language. Users can speak how they normally would to another person and Siri can interpret the meaning correctly or will ask questions to get a better understanding of what was asked. This is a demonstration of the recent technological advances in the area of speech recognition.

 

Since Siri has been so successful, it is only natural for Apple's competitors to come out with their own personal assistants for their mobile products. The latest versions of Android now come equipped with Google Now, Google's own intelligent personal assistant. Google Now is part of the Google Search application and can learn and adapt to a user's search habits over time. Now, Microsoft is set to join the voice recognition competition with its own personal assistant, currently known as Cortana.

 

The name Cortana stems from the popular Halo series on Microsoft's Xbox consoles, in which Cortana is a fictional AI character who feeds information to the main character. With Windows Phone's newest update, Cortana will take the form of an animated circular icon that changes shape when speaking or processing information. Additionally, much like Google Now, Cortana will keep track of a user's previous search requests and use that information to better process requests in the future. Cortana will also attempt to learn a user's schedule and goals, then use what it learns to find and offer useful information.

 

C

See more news at:

http://twitter.com/Cabe_Atwell


Set Top and Smart TV PR images... Keyboard is quite welcome! (via Hisense)


Android fans, unite: Hisense recently announced the launch of its new Smart TV powered by Android 4.2.2. The Hisense H6 Smart TV combines Android with the TV services now offered by Google (formerly known as Google TV). The new toy offers upgraded Easy View capabilities and a remote-controlled air mouse for ease of use. The TV is available in 40-inch, 50-inch and 55-inch models, and for anyone who wants the technology without investing in a new TV, there is the Pulse PRO Set Top Box.

 

The new H6 Smart TV is supported by Marvell’s ARMADA 1500 Plus (88DE3108) HD Media processor and has both 3D and internet capabilities. Those ready to invest in a new TV will enjoy 3D, Netflix, YouTube, Pandora and Vudu HD capabilities with 1080p resolution and a 120Hz refresh rate. The H6 Smart TV also received an Energy Star 6.0 qualification and comes with a wireless, voice-controlled air mouse remote.

 

Those who want the innovative technology of Hisense, Android and Google without purchasing a new TV can enjoy the Pulse PRO Set Top Box. The device connects directly to an existing television and offers almost all of the same features as the H6 Smart TV. Both products are equipped with Hisense's Social TV App and Cloud Services Hi-Media Player and Receiver. Both are also Energy Star 6.0 qualified and feature 1GB RAM and 8GB ROM. The Pulse PRO Set Top Box also includes the H6 wireless controller, which features IQQI Smart Input Technology with just 30 keys.

 


One "smart" remote... always the dedicated Netflix button...

 

Aside from Netflix, Vudu HD, YouTube and Pandora, the Pulse PRO Set Top Box and the H6 Smart TV will also feature Amazon Instant Video, Chrome, Google Play, Prime Time, Android-powered TV v4 media streaming, Google Voice Search, a Marvell BG2-CT board with 4G flash and 1G RAM, Wi-Fi, Ethernet connectivity, Bluetooth capability, HDMI in/out, IR in/out, DLNA, USB and a remote with motion and mic sensors.

 

Both products are currently available. The Hisense H6 Smart TV ranges in price up to $1,099.99, depending on screen size, while the Pulse PRO Set Top Box sells for $199.99.

 

C

See more news at:

http://twitter.com/Cabe_Atwell


Russell Rubman’s Gittler Guitar. (via kickstarter & Russell Rubman)


Guitars, or stringed instruments, haven't changed much over the few thousand years since their inception. (A 3,300-year-old stone carving of a Hittite bard playing a stringed instrument is the oldest iconographic representation of a chordophone, per Wikipedia.) They still feature a body of some sort, a neck, a fretboard and a headstock, usually outfitted with six or more strings. On acoustic guitars it's the hollow body that produces sound when the strings are strummed; on electric models, the sound is produced through electronic pickups that channel the signal to an external speaker or amplifier. Various designs have been produced over the years to give both kinds a distinctive look, but they still feature the traditional parts. Back in the '70s, musician Allen Gittler looked to minimize his guitar's makeup, stripping away all unnecessary parts while retaining the instrument's basic function without handicapping its sound. His resulting design did away with the guitar's body and headstock but retained the frets (situated on a single rod), the strings and a small strumming area.

 

Taking a page from Allen's minimalist design, Russell Rubman has given that layout a 21st-century makeover. His Gittler Guitar maintains the minimalist design but is manufactured from aircraft-grade titanium, outfitted with 31 cylindrical frets (complete with LED lighting) and six string tuners positioned at the bottom of the instrument. Sound is captured using magnetically isolated transducers that send a signal to any MIDI interface or computer, which is then piped to an amplifier. The bottom also features an 'E-Box' with tone and volume controls to alter the signal and produce different sounds, much like an electric guitar. Russell is currently funding the Gittler Guitar on Kickstarter in an effort to acquire backing to manufacture his futuristic remake. Those looking to get their hands on one can pledge $2,000 or more, with delivery just in time for the holidays (estimated delivery by December of this year).

 

C

See more news at:

http://twitter.com/Cabe_Atwell


The University at Buffalo's 40 lb sensor system. (via UB and BBC)

 

Wi-Fi signals can be found almost anywhere: in large cities, rural towns and even in the mountains (next to ski resorts and ranger outposts). You can even find them on the ocean, especially on cruise ships; however, you will not find them under the ocean. Even though radio waves can penetrate water to a certain degree, they have very little range (unless you have access to the US Navy's ELF frequencies), which ultimately rules out watching Netflix at 20 fathoms. A research team from the University at Buffalo is developing a way to overcome the problems with spotty Wi-Fi service beneath the waves. Actually, the team is hoping to create an underwater internet network for improved tsunami detection, submerged oil and natural gas exploration, military surveillance, pollution monitoring and other applications.

 

Instead of submerging Wi-Fi devices in waterproof enclosures, the system works much like existing tsunami-detection networks, which use sensors on the ocean floor that send acoustic (SONAR-based) data to buoys on the surface; the buoys then transmit that data over a radio signal. The challenge isn't the technology itself so much as the incompatible collection methods used by various companies and organizations, so the researchers are aiming to create a shared standard that would let anyone communicate over such a network. To test the system's feasibility, the team dropped two 40 lb weighted sensors into Lake Erie, then typed a command into a Wi-Fi-enabled laptop and sent it to the sensors, which successfully 'pinged' it back to the laptop after bouncing the signal off a nearby concrete wall. When the system might be implemented in existing or newly designed submerged systems is unknown; however, the platform is 'sound' and could be installed sometime in the near future.
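The relay idea comes down to simple arithmetic: the acoustic hop through water is orders of magnitude slower than the radio hop from buoy to shore, which is why undersea links are so laggy. The numbers below are generic physics, not the Buffalo team's published figures:

```python
# One-way latency for the sensor -> buoy -> shore relay. Sound in seawater
# travels ~1,500 m/s; radio waves ~3e8 m/s. Illustrative values only.
SPEED_OF_SOUND_WATER = 1500.0   # m/s (approximate)
SPEED_OF_RADIO = 3.0e8          # m/s

def relay_latency_s(depth_m, buoy_to_shore_m):
    acoustic = depth_m / SPEED_OF_SOUND_WATER    # submerged sensor to buoy
    radio = buoy_to_shore_m / SPEED_OF_RADIO     # buoy to shore station
    return acoustic + radio

# A sensor 100 m down with a buoy 10 km offshore: the acoustic hop dominates.
latency = relay_latency_s(100.0, 10_000.0)
```

Even a short 100 m acoustic hop costs tens of milliseconds, while the 10 km radio hop is effectively free, so a shared standard mostly has to accommodate the slow underwater leg.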

 

C

See more news at:

http://twitter.com/Cabe_Atwell


IBM 5150 PC... (via Wiki)


The late '70s saw the birth of the personal computer, which at the time was being developed in garages by home-brew enthusiasts (Bill Gates and Steve Jobs among them). Not long after, companies such as RadioShack, Commodore International and Apple were successfully selling their own affordable takes on the desktop PC to both companies and individual consumers. When 1980 rolled around, William Lowe, then a product test engineer for IBM, came up with an idea to get the company into the burgeoning personal computing market (IBM was, however, the leading provider of corporate mainframes at the time). He believed it was possible to conceive, engineer and manufacture a personal computer within one year's time, which was unheard of back then. The company took a chance on William's idea, and he went forth and assembled an engineering team, known as the 'Dirty Dozen,' to make the new project a reality. Instead of designing proprietary technology and software, William and his team looked to the fledgling Silicon Valley companies for off-the-shelf parts.



William C. Lowe (via wiki)

 

A year later, the iconic beige box was born, which was a surprise to those in the industry, including IBM. Known as the IBM 5150 Personal Computer, it featured an Intel 8088 processor (clocked at 4.77MHz) and 16KB to 256KB of memory, running Microsoft's DOS 1.0 operating system. The setup cost consumers a mere $1,565, although that was without a monitor or even disk drives (both were available in other configurations of the 5150). After the PC's release, Apple took note of IBM's first offering and placed a full-page ad in the Wall Street Journal stating 'Welcome, IBM. Seriously.' as something of a blasé taunt. Microsoft founder Bill Gates was apparently at Apple's headquarters at the time of IBM's unveiling and later stated that it took Apple a full year to realize what had just happened. The 5150 launched in August of 1981, and by October of that same year droves of people were dropping $1,000 deposits just to get their hands on one. By the end of the following year (1982), the company was selling roughly one PC a minute during the business day (9 to 5). The 5150 was so popular it became known simply as the 'PC,' a term still widely used today. Thanks to William's efforts and insight, the desktop PC is still going strong and can be found most anywhere on the planet. Unfortunately for us, we have lost yet another pioneer of the technology most of us take for granted: William C. Lowe, 72, passed away on October 19, 2013 of a heart attack. He is survived by his wife Cristina, his 4 children and 10 grandchildren.

 

William C. Lowe

January 15, 1941 – October 19, 2013

 

C

See more news at:

http://twitter.com/Cabe_Atwell


Graphene concept art. Single layer...


Graphene is back in the spotlight, and its unusual properties are once again being harnessed, this time for sticky-type memory. Yang-Fang Chen and his fellow researchers from the University of Taipei designed the memory for use in flexible electronics. Flexible memory has been created before, by researchers from the University of Cambridge using nano-wires grown on plastic substrates, but practical applications for it are still a decade or more away. Instead of plastic, the Taipei team used graphene coated in a conductive polymer and topped with aluminum electrodes to create a flexible memory sticker. Using graphene as the memory's substrate gives it a natural attraction to other molecules (known as the van der Waals force), which allows the memory to be attached just about anywhere.

 

In initial tests, the researchers applied their flexible memory sticker to various surfaces, including a business card and a medical bracelet, and found that data retention did not diminish even while the sticker was curved. They also found that the memory could be applied, removed and reapplied a number of times without losing any stored data. The team stated that with a few more parts (like a Wi-Fi module), the device could conceivably be used as a flexible flash drive. Think of it as a Post-it note that can download data from your computer or mobile device, then be peeled off and stuck to another device to upload that data. The possibilities are endless; the team even thinks it will be able to incorporate the memory into other flexible electronics sometime in the near future.

 

C

See more news at:

http://twitter.com/Cabe_Atwell


Toyota’s Advanced Active Safety Research Vehicle. I look forward to computer controlled autos. Mostly to avoid traffic jams (via Toyota)


Anyone who's watched I, Robot (the Will Smith version) can recall the car chase scene featuring an Audi capable of driving itself and avoiding collisions. While that car was fictional, several successful autonomous vehicles are capable of the same feat, although none are mass-produced. According to a recent press release from Toyota Motor Corporation, the company plans to introduce its advanced driving technology to consumers in only two years. Known as Automated Highway Driving Assist (AHDA), the technology uses a series of sensors that allow the vehicle to take control and avoid collisions. The system actively looks for vehicles and other obstacles in the car's path, and if dangerous conditions are detected, the car swings into action, taking control of the brakes and steering to avoid the obstacle.

 

The system gives the driver a chance to react before it takes over, at which point it brings up a visible notice on a display and sounds an alarm. AHDA actually comprises two separate technologies. The first, Cooperative-adaptive Cruise Control, communicates wirelessly with surrounding vehicles so they can maintain a safe distance from one another. The second is Toyota's Lane Trace Control system, which uses millimeter-wave radar along with HD cameras to aid steering control and keep the vehicle in its lane. Toyota has already fielded the technology on a limited scope, with test vehicles driving on Tokyo's Shuto Expressway, and plans to expand testing sometime in the next few years.
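The cooperative-adaptive idea, adjusting speed based on the gap to the car ahead plus the lead car's wirelessly reported speed, can be sketched as a toy feedback loop. The gains, limits and starting state below are invented; Toyota's controller is far more sophisticated:

```python
# Toy follower: accelerate in proportion to the gap error, plus a term
# using the lead car's wirelessly reported speed (the "cooperative" part).
# All constants are made up for illustration.
def simulate_following(lead_speed=25.0, desired_gap=30.0, steps=800, dt=0.1):
    gap, speed = 50.0, 20.0                  # start 50 m back at 20 m/s
    k_gap, k_speed = 0.5, 0.2                # invented controller gains
    for _ in range(steps):
        accel = k_gap * (gap - desired_gap) + k_speed * (lead_speed - speed)
        accel = max(-3.0, min(2.0, accel))   # braking/comfort limits, m/s^2
        speed += accel * dt
        gap += (lead_speed - speed) * dt
    return gap, speed

final_gap, final_speed = simulate_following()
```

After a simulated 80 seconds the follower settles near the desired 30 m gap at the lead car's 25 m/s; the acceleration clamp stands in for the comfort and braking limits a production system must respect.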

 

C

See more news at:

http://twitter.com/Cabe_Atwell


Motorola’s Project Ara. Building your own smartphone seems like a dream. I hope this catches on. (via Motorola)

 

Every smartphone has features some users do not want or could do without. With that in mind, Motorola has turned to designing a modular smartphone that would allow users to connect the hardware they prefer for the applications they use. Known as Project Ara, the idea is to design modular pieces containing specific hardware elements, such as Wi-Fi, connection ports or even keyboards, that connect to one another on a basic platform. The idea for the modular phone came from the Phonebloks project, which would allow users to pick and choose their own hardware fastened to a connectable base. In fact, Motorola has teamed up with the creators of Phonebloks for Project Ara and is currently looking to enlist what it calls 'Ara Scouts' to help design the project's modular pieces.

 

The possibility of a modular smartphone is certainly exciting. Imagine being able to switch out cameras when a newer version comes along without replacing your entire phone, or easily swapping a damaged speaker (think of the EVO 4G) without getting the phone serviced. Unlike the original Phonebloks design, Ara will use an 'endoskeleton' (known as the endo) that holds the modules in place, which could include everything from processors to different displays to an extra battery (incredibly convenient). While a truly modular phone is only in the planning stages at the moment, Motorola will be releasing an MDK (Mobile Developers Kit) to those who signed up as Ara Scouts as early as the end of this year. That means a fully modular smartphone could hit the market as early as the middle of next year; until then, we will have to make do with our non-modular phones.

 

C

See more news at:

http://twitter.com/Cabe_Atwell


MaxLife concept diagram. Lifetime cycles are increased, but not a big improvement in energy density (via TI)

 

Texas Instruments is one of the most dominant technology companies around. Behind Intel and Samsung, it is the world's third-largest producer of semiconductors, and it is the largest manufacturer of digital signal processors and analog semiconductors. Young students may know TI only as the producer of their world-famous graphing calculators, but older, more experienced students quickly learn that TI technology can be found everywhere. In fact, many of the ICs used in basic electronics are made by TI.

 

One additional area where TI excels is energy-efficient electronics. Among its more popular devices is the MSP430 microcontroller family. These MCUs let developers create embedded applications that manage power extremely efficiently: the CPU can run at speeds up to 25MHz or be slowed to save power, and, more importantly, the MCU has a low-power idle mode in which it draws as little as 1 microamp of current. Along with its low-power capabilities, the MCU supports all the usual embedded communication protocols and peripherals.
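Some back-of-the-envelope math shows why that 1 µA idle mode matters. The sketch below assumes a hypothetical 4 mA active-mode draw and a 220 mAh coin cell (both assumed example figures, only the 1 µA idle draw comes from the article):

```python
# Average current for a duty-cycled MCU: mostly idle at ~1 uA (the figure
# quoted for the MSP430's low-power mode), occasionally fully active.
# The 4 mA active draw and 220 mAh cell are assumed example numbers.
def battery_life_hours(active_ma=4.0, idle_ua=1.0, duty=0.01, capacity_mah=220.0):
    avg_ma = duty * active_ma + (1.0 - duty) * (idle_ua / 1000.0)
    return capacity_mah / avg_ma

always_on = battery_life_hours(duty=1.0)   # CPU never idles
duty_cycled = battery_life_hours()         # awake only 1% of the time
```

Running flat out drains the cell in roughly 55 hours, while waking only 1% of the time stretches the same cell to several months, which is the whole appeal of an idle mode measured in microamps.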

 

Of late, TI has been trying its hand at another energy-saving technology: battery management chips. Back in March, the company released its bq2419x family of chips, which it claimed could cut charging times in half. This is technology many companies want a piece of, due largely to the rise of tablets and smartphones. All Android users are well aware of the battery-draining apps we so often use; TI is looking to provide a solution to ease our frustrations.

 

More recently, TI announced the release of a few more energy-efficient chips, collectively known as the MaxLife chip set. These include the bq27530 and bq27531 fuel gauge circuits, which work alongside the bq2416x and bq2419x chargers. Together they are expected to provide faster charging times and increase battery longevity by up to 30 percent. The charger is directly controlled through an autonomous battery management system, which gives designers greater flexibility: there is less software overhead, making the system easier to integrate, and it provides better thermal management and battery safety.

 

MaxLife technology is now available in a development kit, which features a bq27531 fuel gauge connected via I2C to a bq24192 charger. With this combination, charging at up to 4.5 amps can be achieved for single-cell lithium-ion batteries. This is one of the first technologies that allows batteries to charge faster without being damaged, and I do not believe it will be long before we see these chips integrated into consumer products.
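The quoted 4.5 A capability translates directly into charge time for the constant-current phase of a lithium-ion charge cycle. Real charging tapers off near full, so treat these as lower bounds, and note the 3,000 mAh capacity is an assumed example, not a TI spec:

```python
# Constant-current charge time is roughly capacity / current; Li-ion
# charging tapers near full charge, so these are lower bounds.
def cc_phase_hours(capacity_mah, charge_current_a):
    return (capacity_mah / 1000.0) / charge_current_a

slow = cc_phase_hours(3000.0, 1.5)   # a conventional ~1.5 A charge
fast = cc_phase_hours(3000.0, 4.5)   # at the kit's quoted 4.5 A
```

Tripling the current cuts the bulk of the charge from about two hours to about forty minutes, which is why pairing the charger with a fuel gauge that keeps the battery within safe limits is the interesting part.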

 

C

See more news at:

http://twitter.com/Cabe_Atwell


Artist concept of the finished castAR glasses (via castAR kickstarter page)

 

Augmented and virtual reality headwear has risen in popularity ever since Google Glass and the Oculus Rift hit the market. These devices either overlay interactive applications on the environment being viewed or create a computer-generated environment that puts the user in a simulated world. The two perspectives are usually kept separate, each built into its own device rather than combined in one, largely because there is too much hardware involved to pack into a small space. One company, however, has seemingly managed to incorporate both AR and VR into a simple pair of glasses. Just 20 hours after posting their proposition on Kickstarter, Technical Illusions reached their funding goal of $400,000 for the castAR AR and VR system.


Ex-Valve employees Jeri Ellsworth and Rick Johnson designed the glasses (first developed as Valve's castAR project) so that users can see 3D holographic projections situated right in front of them. This is done using two micro-projectors, housed on either side of the glasses' frames, each projecting a portion of the image onto a flat surface. The user's eyes are able to focus naturally on the combined images, eliminating the dizziness and nausea experienced with other gaming headsets (the Oculus is notorious for this). The system uses active shutter glasses with a 120Hz refresh rate, which is necessary to view 3D video and images (higher would be better, however). The glasses project the images (at 720p resolution) onto a specialized retro-reflective RFID tracking surface studded with IR markers, which bounces them back to the glasses; the images and video are piped to the lenses through an HDMI connection. A camera (connected via USB) housed in the center of the frames scans the surface for the IR LEDs built into the mat to track the user's head, and specialized software adjusts the image depending on the viewing angle. A simple clip-on attachment lets users convert the glasses from projected AR into true augmented reality (used without the mat) or full virtual reality similar to the Oculus Rift.
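The head-tracked adjustment boils down to projective geometry: each virtual point must be drawn where the line from the tracked eye position through that point meets the mat. A bare-bones sketch (this is generic math, not Technical Illusions' code):

```python
# Where should a virtual point appear on the mat (the z = 0 plane) so it
# lines up with the viewer's eye? Intersect the eye-to-point ray with the
# plane. Generic projective geometry for illustration.
def project_to_mat(head, point):
    hx, hy, hz = head     # tracked eye/head position (meters)
    px, py, pz = point    # virtual point floating above the mat
    t = hz / (hz - pz)    # ray parameter where z reaches 0 (requires hz != pz)
    return (hx + t * (px - hx), hy + t * (py - hy))

# The same floating point lands at different mat coordinates as the head
# moves; that shift is the parallax the tracking camera lets the software
# recompute every frame.
view_a = project_to_mat((0.0, 0.0, 0.5), (0.1, 0.1, 0.1))
view_b = project_to_mat((0.3, 0.0, 0.5), (0.1, 0.1, 0.1))
```

Because the mat position of every virtual point depends on the head position, losing track of the IR LEDs for even a frame would make the hologram visibly swim, which is why the head-tracking camera is central to the design.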

 

Another interesting add-on is the 'Magic Wand' controller, which has an IR marker on its tip that lets users wield it as either a gaming joystick of sorts or a 3D input device. It is also outfitted with several buttons, an analog joystick and a trigger, allowing additional options in multiple applications. Gaming with the castAR system isn't limited to video games; the RFID mat can be used for board games as well. Users can affix RFID bases to game tokens or miniatures, like those from Dungeons and Dragons or MechWarrior, which can then show vital or crucial information about the player on the virtual board. Boards can be created and configured using the company's castAR software suite, which supports online play as well, so friends can compete over an internet connection. Those looking to get their hands on a castAR system can pledge $189 and up, which nets a pair of castAR glasses and a 1-meter by 1-meter surface pad; $285 gets the whole package, including glasses, mat, AR and VR clip-ons and the Magic Wand.

 

C

See more news at:

http://twitter.com/Cabe_Atwell

iPadKid.jpg

I would have loved a tablet when I was his age... (via LAUSD)

 

Earlier this month (October 2013), the Los Angeles Unified School District unveiled its ambitious proposal to introduce a series of tools designed to raise the academic standards students need to succeed in college or to build the skills needed for a career. The $1 billion project aims to give every student in the district (more than 600,000) an iPad tablet in an effort to build 21st-century technology skills that better prepare them for the future. While the initiative is designed to ‘level the playing field’ for both wealthy and underprivileged children (allowing them access to the same opportunities), the rollout of the project has had a few shortcomings. First, the parents of those children wanted to know why the students were not being taught traditional vocational skills such as machine shop, while others asked about school board politics and priorities rather than anything about the technology or the project itself.

 

The district’s technology project works in conjunction with the national Common Core State Standards Initiative (for children in grades K through 12), which provides a set of standards for mathematics and English language arts. The second issue stems from the tablets themselves (purchased at $678 each), as some schools are not equipped with the Wi-Fi needed to download the educational material in class. These schools will need to be upgraded so the children have access to the related content, which may or may not have been included in the billion-dollar budget. As the first batch of tablets was rolled out to roughly 47,000 students who were allowed to take them home, the district found that some of those students were tech savvy and quickly disabled the tablets’ firewall, allowing them to surf the web freely and visit sites deemed inappropriate by school standards (suffice it to say, they weren’t learning math). This presented a liability to the district, as students could become the victims of sexual predators while using school property.

 

The schools quickly remedied that problem by restricting the tablets to in-school use only until the district finds a way around those hacking endeavors. Another issue is how the district will repair or replace tablets that become damaged. Apple has stated it will replace 5% of those that no longer function, leaving the schools to find their own solution for the rest. The problems don’t stop there, as L.A. Unified forgot to factor in the training some teachers would need to use the iPads in their curriculum; some have never used an iPad, or any other tablet for that matter. Other issues, such as theft and the inclusion of keyboards for classwork, also need to be addressed. While the district has slapped a Band-Aid on a few of those cracks, it will need to do a lot more, and soon, before the dam breaches and it takes more than $1 billion to fix those problems. Still, the thinking was in the right direction, as students will undoubtedly need technology-related skills if they’re to succeed in the future.

 

C

See more news at:

http://twitter.com/Cabe_Atwell

First, I am going to suggest a soundtrack for reading this post… Please hit play below and move on:

 

 

Touchscreens are becoming the user interface of the future. It began with smartphones, then came the iPod Touch and then tablets. Now we can find touchscreens in many places. Some restaurants have small touchscreen computers that let diners flip through the menu and check the balance of their current bill. Newer apartment buildings and office complexes are integrating touchscreen directories at the front of the building for getting in contact with the person of interest. In addition, many computers and laptops now offer a touchscreen monitor for an improved user experience. With that said, many new touch technologies are beginning to emerge, so let’s take a look at some of these upcoming innovations and catch a glimpse into the future of the touchscreen interface.

 

ultrahaptics.JPG

(via Ultrahaptics)

 

To begin with, we have a new haptic feedback system being developed by researchers at the University of Bristol in the UK. The system integrates a Leap Motion controller along with ultrasound speakers placed behind a display. While the Leap Motion controller tracks the user’s fingers and hand gestures, the ultrasound speakers provide a sensation on the fingers that stimulates the sense of touch. The system, called UltraHaptics, consists of an array of 320 of these ultrasound speakers.

 

Tom Carter, a lead researcher on the project, stated, “What you feel is a vibration. The ultrasound exerts a force on your skin, slightly displacing it. We then turn this on and off at a frequency suited to the receptors in your hand so that you feel the vibration. A 4-Hertz vibration feels like heavy raindrops on your hand. At around 125Hz it feels like you are touching foam and at 250Hz you get a strong buzz.”
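In other words, the transducers emit an ultrasonic carrier (far too fast to feel on its own) whose intensity is modulated at a slower, touch-perceivable rate. Here’s a rough Python sketch of that amplitude modulation, using the sensation frequencies Carter mentions — the 40kHz carrier is a common figure for ultrasonic transducers and an assumption on my part, not a published UltraHaptics spec:

```python
import math

CARRIER_HZ = 40_000  # assumed ultrasonic carrier; imperceptible by itself

# Modulation rates from the article, and what each reportedly feels like
TEXTURES = {
    "raindrops": 4,    # heavy raindrops on the hand
    "foam": 125,       # like touching foam
    "buzz": 250,       # a strong buzz
}

def haptic_sample(texture, t):
    """One sample of an amplitude-modulated ultrasound signal at time t
    (seconds). The slow envelope, not the carrier, is what the skin's
    receptors actually perceive."""
    f_mod = TEXTURES[texture]
    envelope = 0.5 * (1.0 + math.sin(2.0 * math.pi * f_mod * t))
    return envelope * math.sin(2.0 * math.pi * CARRIER_HZ * t)
```

Driving a 320-speaker array also involves per-transducer phase delays to focus the ultrasound at a point in mid-air, which is well beyond this sketch — the point is simply that changing one modulation frequency changes the perceived texture.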

 

 

A similar technology being developed by Disney Research uses electrostatic forces to simulate a sense of touch. Rather than using sound waves to compress the skin and provide the feeling of a textured surface, Disney’s researchers have been employing electrovibrations, which can stretch and compress the skin. The vibrations have been used to create the same kind of lateral friction one would experience when sliding a finger across a bump.

 

“Our brain perceives the 3D bump on a surface mostly from information that it receives via skin stretching,” said Ivan Poupyrev, who directs Disney Research Pittsburgh’s Interaction Group. “Therefore, if we can artificially stretch skin on a finger as it slides on the touch screen, the brain will be fooled into thinking an actual physical bump is on a touch screen even though the touch surface is completely smooth.”

 

 

Both of the aforementioned technologies aim to enhance the user experience and create more tactile-rich displays. Meanwhile, the Japanese company AsukaNet is developing an Aerial Imaging Plate. Similar to a hologram, the system projects an image that appears to float in front of the user, who can then navigate menus or anything else a touchscreen interface might be used for. To accomplish this, a tablet interface is used alongside reflective surfaces that project the image in front of the user at a 45-degree angle. The user must also stand in a specific position in front of the display; if not, the image appears as a normal flat picture. The company notes this can be an advantage in scenarios where privacy is important. For instance, when interacting with an ATM, only the user directly in front of the display would be able to see which numbers they are poking at. The projected display also improves on the sanitation many interfaces lack: since no physical contact is involved, germs and viruses cannot be transmitted from one person to another.

 

eyetoy.jpg

Printed toy with a little optical imaging sensor built in... (via Disney Research)

 

The last of the new and innovative touch technologies we can expect to see in the future is the curved touchscreen. One of these comes from Disney Research and is made during the process of 3D printing. Thanks to a light-sensitive plastic known as photopolymer, optical fibers can be printed alongside the main structure of a 3D object. This allows engineers to connect the optical fibers to an image source, which can then transmit information to and from a curved surface. Disney has already created many prototypes, many of them cartoon-shaped creatures with large eyes that look around the room.

 

lgflex.jpgLGGFlex.jpg

The bendy screen on the left. Possible image of the actual LG G Flex phone on its way to the market. (via LG)

 

LG has also announced a flexible display set to be released next year. It is a 6-inch flexible OLED screen that LG claims is indestructible. The display is made from a plastic substrate and consists of layers of film-type encapsulation and a protection film. Overall, the display is only 0.44mm thick, weighs just 7.2 grams, and has a vertical concave radius of 700mm.

 

“LG Display is launching a new era of flexible displays for smartphones with its industry leading technology,” said Dr. Sang Deog Yeo, Executive Vice President and Chief Technology Officer of LG Display. “The flexible display market is expected to grow quickly as this technology is expected to expand further into diverse applications including automotive displays, tablets, and wearable devices. Our goal is to take an early lead in the flexible display market by introducing new products with enhanced performance and differentiated designs next year.”

 

LG will not be the only one in the flexible display market; Samsung has also been showing off prototypes and experimenting with technologies of its own. We can expect these technologies to hit the market sometime late next year. However, CES 2014 is also right around the corner, and as the world’s largest gathering of consumer electronics, it is almost certain to showcase many more innovative display technologies.

 

C

See more news at:

http://twitter.com/Cabe_Atwell
