
Single-Board Computers


Toyota’s Platform 2.1 autonomous vehicle features two modes- Guardian and Chauffeur. (Image credit Toyota)

 

Last month, Toyota unveiled the latest version of its autonomous vehicle: Platform 2.1, built on a modified Lexus LS 600hL, which features better sensors, improved detection, and a pair of steering wheels for better human control. The vehicle serves as a base for the Toyota Research Institute's autonomous research systems, known as Guardian and Chauffeur, which provide improved navigation and safety. Chauffeur targets Level 4 autonomous driving (there are six levels altogether), meaning the vehicle is highly autonomous: it is designed to perform all safety-critical driving functions and monitor all roadway conditions. However, it is limited to its ODD (Operational Design Domain) and doesn't cover all driving scenarios.

 

Guardian mode acts as it sounds: a driver-assist system that monitors the environment around the vehicle, alerts the driver when a hazard appears, and assists with crash avoidance when necessary. It also monitors the driver's eyes using an infrared sensor mounted on the dashboard to detect drowsiness and distraction, and acts accordingly when adverse behavior is detected.

 

As for the dual steering wheels, the passenger side acts as a drive-by-wire system for acceleration and braking, serving as a method for transferring vehicle control between the human driver and the autonomous system. Toyota successfully tested both systems on an enclosed track, where they performed admirably. While it was a great proof-of-concept demonstration, no closed track fully reproduces real-world scenarios, although one comes close: California's GoMentum Station.

 

GoMentum Station is known for tough, realistic driving conditions and offers 2,100 acres of testing ground with 19.6 miles of paved roads. (Image credit Wing via Wikimedia)

 

GoMentum Station (the former Concord Naval Weapons Station) is known for tough, realistic conditions where automobile companies can test autonomous vehicles without worrying about damaging property, as it's a closed-off site of roughly 5,000 acres with 19.6 miles of paved roadways. The Toyota Research Institute will use the grounds to further test its Platform 2.1 autonomous vehicle, which now includes a new LIDAR system with longer sensing range, a denser point cloud for detecting the position of 3D objects, and a larger, dynamically configurable FOV.

 

GoMentum's grounds offer varied terrain and real-life infrastructure, complete with roads, bridges, tunnels, parking lots, and intersections: most everything encountered in cities and urban areas. TRI's vice president of autonomous driving, Ryan Eustice, feels this enclosed testing area will help further Platform 2.1's development, stating, “The addition of GoMentum Station to TRI’s arsenal of automated vehicle test locations allows us to create hazardous driving scenarios for advancing capabilities of both Guardian and Chauffeur and further develop our technology.” That said, Toyota still has a long way to go before its autonomous vehicles enter the market, but this is an important step, as safety for drivers and pedestrians is paramount.

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell

MNS robots can self-heal if one becomes damaged or malfunctions. (Image credit Nature Communications)


All references to ‘Skynet’ aside, researchers from the Université Libre de Bruxelles have developed robots that can function on their own or come together to form a mergeable nervous system, complete with a single hive-like mind and the ability to self-heal. They can also arrange themselves into different configurations to perform different tasks and achieve optimal performance in different environments.

 

This was no simple feat. Scientists have already developed several robotic platforms in which robots can work together, but not at the level of efficiency they would like. For instance, some robots can be programmed with information about their surroundings and then organize themselves to solve a problem, which isn't easy to do. Another method is to connect a robotic swarm to a central command, or brain center, that controls each individual robot, but this approach can be disastrous if the central command malfunctions.

 

 

This new system bridges those two methods. The researchers call the capability a ‘Mergeable Nervous System for Robots,’ a sort of modular platform with self-healing capabilities: if one robot becomes damaged or malfunctions, it is either repaired by the other robots or discarded completely. When merged, control of the robots shifts to a single unit, which controls everything, including shape and job function. If that leader bot malfunctions, the leadership position shifts to another robot, which takes control of the group and continues to carry out that particular job.
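As a toy sketch of that failover idea (an illustration of mine, not the researchers' actual control code), leadership can be modeled as picking the first healthy unit and re-electing whenever the current leader fails:

# Toy sketch of MNS-style leader failover (illustration only).
class Unit:
    def __init__(self, name):
        self.name = name
        self.alive = True

def elect_leader(units):
    """Return the first healthy unit; the rest follow its commands."""
    for unit in units:
        if unit.alive:
            return unit
    raise RuntimeError("no healthy unit left in the group")

swarm = [Unit("A"), Unit("B"), Unit("C")]
leader = elect_leader(swarm)   # unit A acts as the single 'brain'
leader.alive = False           # the brain unit malfunctions...
leader = elect_leader(swarm)   # ...and unit B takes over the job
print(leader.name)             # -> B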

 

The mergeable nervous system concept relies on sensors for sending and receiving messages to perform job functions. (Image credit Nature Communications)

 

Each MNS robot is built in a sandwich-like fashion. The base module is outfitted with combined wheels and tracks (a.k.a. Treels) and a series of sensors (3D accelerometer, gyroscope, IR, RFID, etc.) used for navigation and movement. Stacked on top of that layer is an inter-robot connection module that allows the robots to connect to one another, while another utility module acts as a magnetic coupler, attracting other metal objects. Above that is the range & bearing communications module, which lets each robot connect wirelessly, while yet another module acts as a rotating scanner for long-range detection. On top sits the brain that controls it all, featuring an ARM 11 processor, Bluetooth, WiFi, and a pair of cameras.

 

Cram all of those components together and you get the MNS platform, which runs a Linux-based OS. Of course, these robots are still a work in progress, meaning you won't see them working on job sites or in hazardous environments anytime soon, but the researchers hope to develop robots with flexible appendages that can function in three dimensions using the same technology.

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell

Google introduces its own AR software development kit dubbed ARCore. ARCore doesn’t require any special equipment; you only need your phone. (Image via Google)

 

Not too long ago, Apple stepped up its AR game with ARKit, its first attempt at an augmented reality platform. A tool that allows developers to create AR apps easily, it has already yielded some impressive results, like a virtual pet game and a restaurant app that displays food on a plate. Not to be outdone, Google announced its own augmented reality platform, called ARCore.

 

Similar to Apple's platform, ARCore is a software development kit (SDK) that brings AR capabilities to current and future Android phones. Unlike Google's other AR project, Tango, no special hardware is required – all you need is your phone to start experimenting. You can access the software right now if you have a Pixel or a Samsung Galaxy S8, both running Android 7.0 Nougat or higher. That's pretty limited, but the company hopes to have ARCore running on other phones by LG, ASUS, and others in the future.

 

ARCore works with Java/OpenGL, Unity, and Unreal, similar to Apple's software, and focuses on three features: motion tracking using your phone's camera, environmental understanding to detect horizontal surfaces, and light estimation to make sure the lighting and shadows of virtual objects match your surroundings.
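To make the light-estimation idea concrete, here is a toy sketch of mine (not the actual ARCore API): estimate the scene brightness from a camera frame, then scale a virtual object's color to match.

# Toy sketch of light estimation (not the ARCore API).
import numpy as np

def estimate_brightness(frame):
    """Mean pixel intensity of a grayscale frame, normalized to 0..1."""
    return float(frame.mean()) / 255.0

def shade_virtual_object(base_color, frame):
    """Scale an RGB color by the estimated scene brightness."""
    k = estimate_brightness(frame)
    return tuple(int(c * k) for c in base_color)

dim_room = np.full((480, 640), 40, dtype=np.uint8)       # dark synthetic frame
print(shade_virtual_object((200, 180, 160), dim_room))   # muted virtual color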

 

Wanting to stay on the cusp of emerging technology, Google has already invested in apps and services specializing in AR. Some examples are the 3D tools Blocks and Tilt Brush, which make creating 3D content pretty easy. The company is also working on a Visual Positioning Service (VPS), which will enable world-scale AR experiences that go beyond a tabletop. At the same time, it is releasing prototype browsers to let web developers start playing around with AR. These AR-enhanced sites can run on both Android/ARCore and iOS/ARKit.

 

ARCore is just one way to reach the company’s goal of making AR accessible to everyone. And people have wasted no time taking advantage of the software. Google shows off some examples in their new AR Experiments showcase. And so far, the results are impressive. From cartoon blobs that grow from the ground to drawing a little stick figure and making him dance, it’s clear there are lots of possibilities with AR. Hopefully, Apple isn’t too angry that Google seems to be moving in on their territory.

 

Follow this link to their ARCore showcase.

 

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell


Swedish company Einride is looking to employ a fleet of 200 T-Pods by 2020 for a new autonomous delivery service. The T-Pod looks like something out of a sci-fi film (Photo via Einride)

 

We've all heard the stories about self-driving cars and their unfortunate accidents, but did you ever think you'd see a self-driving delivery truck on the road? What kind of accidents do you imagine? One Swedish company is aiming to make this a reality. Einride recently showed off its prototype of the T-Pod, an autonomous electric truck. Looking like a cross between a hilariously huge iPad and a kitchen appliance, the vehicle can move 15 standard pallets, travel 124 miles on one charge, and weighs a total of 20 tons when full. The truck has no cab space or windows, which explains its odd, futuristic design.

 

For navigation, the truck employs a hybrid driverless system. The T-Pod can steer itself on the highway, but on main roads a human will operate the vehicle remotely. There will also be people on hand to take control of several trucks at once on the highway, should such a situation arise.

Right now, the T-Pod is still in its early prototype stage, but Einride is working to have the first truck completed by the fall. The company has bigger goals for the future: it wants a fleet of 200 T-Pods traveling between Gothenburg and Helsingborg. So far, Einride claims to have filled 60 percent of the capacity of the 200 T-Pods it plans to build. Along with this, it is also working on charging stations to power the trucks.

 

Einride isn't the only company looking into autonomous delivery trucks. Waymo, Uber, and Daimler are exploring similar technology. Volvo, also from Sweden, has been investing in self-driving technology for its own trucks. Last month, the company revealed self-steering trucks that help sugarcane farmers improve their crop yield. While this vehicle isn't totally autonomous (there's still a driver inside), it drives alongside the harvester to deliver the crops off-site with minimal damage to the crop itself. Volvo has run similar tests in the mining industry and in garbage collection.

 

Even though these trucks sound cool, they should still be approached with some apprehension. Do we really want self-driving trucks delivering loads of cargo? What if something goes wrong? Hopefully, these companies are taking this into consideration. Chances are we won't see anything like this in the States soon: we're already getting a lot of pushback just getting delivery bots on the sidewalk, so the idea of self-driving trucks on our highways probably won't fare any better.

 

I didn’t mention it in the review, but the movie “Logan” featured self-driving trucks that didn’t care one bit about pedestrians on the road. Perhaps an early prediction?

 

Either way, this is great inspiration for the IoT on Wheel Design Challenge.

 

http://twitter.com/Cabe_Atwell


Amp Cortex Robotic Sorter. (Image credit Amp Robotics)

 

Let's face it, we live in a world of ‘plastic Armageddon,’ where an estimated 300 million tons of plastic are produced each year, according to the Worldwatch Institute, and a lot of it is buried in landfills or finds its way into the oceans, culminating in giant floating islands. Surely recycling must be a surefire way to curb this environmental scourge? We'd be wrong: a 2012 EPA report states that only about 14% of those many tons is recycled, meaning we've dropped the ball in our efforts to go green, at least on the plastic side of things.

 

 

That's OK, though, as robots seem more than willing to pick up that ball and help save us from ourselves; some municipal waste facilities are turning toward mechanical recyclers to help curb our plastic problem. Case in point: Denver-based AMP Robotics (Autonomous Manipulation and Perception) recently teamed up with the Carton Council and Colorado-based Alpine Waste & Recycling to make recycling more efficient.

 


(Image credit AMP Robotics)

 

The new pilot program teaches robots such as AMP's Cortex (pictured above) to recycle with more gusto and intelligence by outfitting decades-old sorting machines with ‘eyes’ and giving them an AI brain. According to the AMP press release, “This new robotic system, AMP’s Cortex, learns to identify and then grab food and beverage cartons using the latest technology in robotics and artificial intelligence.”

 

The robot is outfitted with specially designed grippers to pick up and separate various-sized food and beverage cartons at superhuman speed; the sorted cartons are then sold to institutions that turn them into new products and building materials. What's more, what the robot learns as it goes can be ported over to other recycling facilities and injected into their robotic platforms without starting from square one. And it doesn't end with paper or plastic refuse: it can be taught to separate just about anything, including e-waste.

 

The platform works by using a visible-light camera system to spot the material to be separated as it passes below on a moving conveyor, then uses its suction-cup appendage to grab that material and separate it from the rest of the refuse. It can sort at a rate of 60 items per minute with an accuracy of 90%, far faster than the average human.

 


Sadako Wall-B sorting robot. (Image credit Sadako Technologies)

 

AMP's Cortex isn't the only trash-sorting robot deployed to recycling centers: Sadako Technologies installed a similar robot, known as Wall-B, at Barcelona's Ecoparc 4 Waste Treatment Centre back in 2015. In this case, the robot's primary function is to separate valuable materials from common waste, and it uses a system similar to Cortex's, relying on computer vision and AI to learn which materials need to be separated while a robotic arm plucks them out of the trash moving on a conveyor belt underneath.

 

 

Of course, separating plastics is a great start, but not all plastics are the same. We will still end up with unrecyclable plastics (those with polycarbonates) squeezed into already-full landfills or chucked into the world's oceans, releasing brain-altering BPAs as they slowly decompose.

 

There is hope on that front as well, as scientists are developing ways to keep those BPAs from being released during decomposition by using a type of thermoplastic polymer known as polysulfones. These new polymers are as tough as the plastics that contain BPA, but they won't leach the chemical under high heat or degradation.

 

Regardless, these developments are not a stopgap solution or a quick fix for ridding ourselves of the plastic menace; that said, they are a promising start in a long-game endeavor that may indeed help us in the future.

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell


This Promobot saves a little girl from a falling shelf, but many question its authenticity. Is this video too good to be true? (Photo from Promobot)

 

Robots have a reputation for being cold, emotionless hunks of metal, no matter how hard Hollywood tries to convince us otherwise. But is it possible we've assumed the worst about these bots? A new story seems to think so. A video made the rounds online showing a robot rushing to save a little girl from being crushed under a falling shelf. The heroic bot comes from the makers of Promobot.

 

They say the machine entered into “mirror mode” on its own and was able to save her life by copying a human action.

 

The clip, captured in the lobby of Perm Polytechnic University in the central Russian city of Perm, shows the robot sitting in the lobby, minding its own business. In steps a little girl, who starts climbing up shelves filled with boxes. As a shelf is about to topple over, the robot raises its hand and stops it from falling on her. It sounds too good to be true, and many think it is. Though the video is convincing, many wonder if it's staged, especially since the company has been accused of staging stunts before. The same robot has reportedly tried to escape its testing facility twice, and once started responding to questions in a profound manner. This is starting to sound like an Ex Machina sequel.

 

So far nobody has proved the video is fake, but one of the biggest red flags is that the story comes from the creators rather than outside sources. If that wasn't fishy enough, take notice of the safety rope that's conveniently moved out of the way, allowing the little girl to run over to the dangerous area. Keen viewers will also notice how the boxes seem empty, judging by how they fall, and how the shelves themselves seem out of place: they're free-standing rather than part of a larger storage space. Also, no one seems alarmed by the accident, and the girl herself doesn't seem that worried about it. After it happens, she just wanders off to join a parent offscreen.

 

Promobot manager Oleg Kivokurtsev said that when the incident happened, they were in the middle of a graduation ceremony, which the bot was meant to open. According to Kivokurtsev, the plan was to have the robot “congratulate the graduate students and remind them that future is for robots.”

 

While the story is definitely noteworthy and grabs your attention, there are too many red flags to take it at face value. While we’d like to think robots would be adept enough to save us in an emergency, chances are this story is a fake. Just remember, be careful what you read on the internet.

 

I'd like to think this is the real deal.

 

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell

 

 

 




Estonia and a number of other countries are ready to embrace delivery bots; other places not so much. Will these adorable bots be rolling down a street near you? (Photo via Starship Technologies)

 

It was only a few years ago that the thought of autonomous delivery drones made us panic about a robot takeover. Now, more people are getting used to the idea, including government officials. Recently, state legislatures in Virginia and Idaho granted Starship Technologies' small delivery bots permission to operate on sidewalks. Even Estonia is getting in on the action: the country passed a measure 86 to 0 in parliament this month, making it the first country in the EU to accept these tiny bots.

 

But before these little guys start roaming the streets, there are stipulations they have to abide by. The robots have to remain small: no taller than one meter, no longer than 1.2 meters, and no heavier than 50 kilograms. In addition, the robots have to be white and equipped with red rear reflectors and lights to make them easy to spot at night. For Starship Technologies this shouldn't be an issue, seeing as its robots already fit these criteria.
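Those stipulations amount to a simple compliance checklist. As a quick illustration (a hypothetical function and values of mine, not Estonia's legal text):

# Hypothetical compliance check against the size rules described above.
def is_compliant(height_m, length_m, weight_kg, color, has_rear_reflectors):
    return (height_m <= 1.0 and length_m <= 1.2 and weight_kg <= 50
            and color == "white" and has_rear_reflectors)

# A Starship-class bot easily fits the criteria:
print(is_compliant(0.55, 0.7, 23, "white", True))  # -> True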

 

Though more governments are open to the idea of delivery drones, there are still plenty who see them as a red flag. Norman Yee, a San Francisco city supervisor, proposed legislation to keep these bots off the sidewalks, seeing them as a potential threat to public safety. The city already bans bikes and skateboards from sidewalks, so preventing a bot from rolling down the street is no surprise. Officials worry these bots will take over the sidewalk and make it difficult for kids, seniors, and people with disabilities to get out of the way, leading to some nasty accidents.

 

Also, there are still several things to consider, like security. Though some robots come equipped with an alarm system and camera to deter thieves, that won't stop people who want to steal or even destroy them. And, of course, there may be hackers who want to crack into the system. On top of that, if larger companies want to take advantage of the technology, they'll need larger fleets. Where and how would the bots be stored? These are all issues cities are considering before offering their approval.

 

Despite the downsides, the delivery robots keep coming. Marble and Yelp are already employing the little bots for food deliveries, and Dispatch is relying on them for home deliveries. Giants Amazon and Google are looking to implement delivery drones as one of their many services. Companies like Starship Technologies are making progress, but it will be a while before everyone allows these bots to roam the sidewalks.

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell


DeepMind's AlphaGo recently beat Go world champion Ke Jie, becoming the first machine to take the title. Ke Jie considers his next move against the Go-playing AI (Photo from DeepMind)

 

In the latest battle of man vs. machine, machine came out on top once again. Google's AlphaGo, an AI program developed by DeepMind to play the game of Go, beat world champion Ke Jie for the second time, giving it the lead in the three-part series. Technically, this makes AlphaGo the world's best Go player: it has beaten two of the game's biggest champions in under a year.

 

AlphaGo not only played the game but analyzed its opponent's moves. According to the AI, Ke played “perfectly” for the first 50 turns, but as the game continued, the AI changed its strategy and forced Ke to resign. DeepMind CEO Demis Hassabis said at the press conference that the first 100 moves were the closest anyone has ever played against the AI. After winning this match, AlphaGo is retiring from the competitive game scene.

 

As the first computer program to defeat a professional Go player, AlphaGo has definitely made history. It first made major headlines in 2015 when it won against three-time European Go champion Fan Hui, taking the glory with a 5-0 win; not bad for its first match against a professional human player. A year later, it faced off against Lee Sedol, who holds 18 world titles and is considered the greatest Go player of the past decade. It was this match that earned AlphaGo a 9-dan ranking, the first time a computer Go player has ever earned the title.

 

The game of Go is often considered far more difficult for computers to win than other board games, like chess. Most of the difficulty comes from the roughly 10^170 possible board configurations.

 

So, how did DeepMind create the current Go champion? The program combines an advanced tree search with deep neural networks. It takes the Go board as an input and passes it through network layers containing millions of connections, then decides which move gives it the best chance of winning the game.
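As a toy illustration of that idea (vastly simpler than DeepMind's real networks, and purely a sketch of mine), a policy network is just a function from a board position to a probability for each candidate move:

# Toy one-layer 'policy network' for Go (illustration only).
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((19 * 19, 19 * 19)) * 0.01    # dense layer weights

def move_probabilities(board):
    """board: 19x19 array of {-1, 0, +1}. Returns one probability per point."""
    logits = board.reshape(-1) @ W
    logits[board.reshape(-1) != 0] = -np.inf          # occupied points are illegal
    exps = np.exp(logits - logits[np.isfinite(logits)].max())
    return exps / exps.sum()

probs = move_probabilities(np.zeros((19, 19)))        # empty board
print(int(probs.argmax()))                            # the toy network's pick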

 

To train the program, researchers showed it numerous strong amateur games so it could develop an understanding of what human gameplay looks like. Once it got the hang of that, it played against different versions of itself thousands of times, learning from its mistakes and figuring out where to make improvements.

 

AlphaGo may be retiring from gaming, but DeepMind isn't ready to move on just yet. The company wants to publish a final paper on how the AI has developed since its match with Lee Sedol last year. It also wants to use the program to help teach others how to play the complicated game.

 

With this victory, I am not as worried about machines taking over as I am about machines taking even more jobs! But that's progress for you.

 

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell

I have a MicroSOM i2eX with fuses configured to boot from SATA. I have added SATA boot support for the HummingBoard Pro board to u-boot (based on version 2017.05-rc1).
Here is the repository: u-boot-imx6

 

Compilation:

hg clone https://kkubacki@bitbucket.org/kkubacki/u-boot-imx6
export ARCH=arm
export CROSS_COMPILE=/usr/bin/arm-linux-gnueabihf-
make mx6cuboxi_defconfig
make

As a result we get two files: SPL and u-boot.img. The SPL file should be placed at offset 0x00000400 on the SATA device (sector 2 at 512 bytes per sector, hence the seek=2 below).

Copy SPL to the SATA device, e.g. /dev/sda:

dd if=SPL of=/dev/sda bs=512 seek=2
sync

u-boot.img should be placed on the first EXT4 partition of the SATA device.
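For example, assuming the first partition is /dev/sda1 and already formatted as EXT4:

mount /dev/sda1 /mnt
cp u-boot.img /mnt/
sync
umount /mnt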

About this post

A twenty-to-thirty-minute guide to connecting the BeagleBone Black with the ITBP h-nanoGSM shield.

 

Main parts

  • BeagleBone Black
  • h-nanoGSM - GSM shield [nano] or
  • any other ITBP modular modem,
  • 1 pcs. one-cell LiPo battery [3.7V, > 250mA], or
  • 1 pcs. super-capacitor larger than 1F, rated above 5V and with ESR lower than 250mOhm [we've tested the SCMT22C505MRBA0 from AVX and the PM-5R0H105-1 from POWERSTOR/EATON]

 

Knowledge and skills required

  • previous BeagleBone Black experience is quite welcome,
  • some entry-level Python and Linux knowledge is required,
  • soldering

 

About BeagleBone Black

Folks, finding the BBB was a pleasant surprise for me! The BeagleBone Black is awesome: the best SBC I have ever used! One very good BBB presentation can be seen here.

 

About h-nanoGSM shield

The h-nanoGSM shield became commercially available in August 2016. It is a quad-band, GSM-only (worldwide compatible) + Bluetooth 3.0 nano shield, packed into a compact 1.25"x1.16" (31.75x29.46mm) format and weighing around 8g. Like its bigger brothers [the c-uGSM, a dual-SIM GSM-only engine, and the d-u3G, a 3G/GSM engine], it is not just a breakout board but a full shield with powerful embedded features: USB support (communication and powering), auto-level 2.8V-5V digital interfaces, and an integrated Lithium Polymer charger.

 

h-nanoGSM, ITBP modular modem GSM+BTH3.0

 

Some hardware hints

The BBB board is a really quite powerful engine in a small format. Having five UART ports gives us lots of alternatives for interfacing.

BBB pinout

Keep in mind that almost all BBB logic pins have several functions, accessible through the PinMUX [mode0 ==> mode7]. Take a look at the following PDFs [very important resources]:
BBB P9 pins table [pdf] and BBB P8 pins table [pdf]

 

In our example, we will use the BBB UART1 for data interfacing with the modem, and P9_14 [EHRPWM1A], P9_16 [EHRPWM1B], and P9_18 [I2C1_SDA] as the modem control interface [CONTROL].

 

Prepare your h-nanoGSM shield. Solder the pin header; see how here. Connect the LiPo battery [mind the polarity!] or the super-capacitor, and connect the antenna to the GSM uFL connector. Insert the SIM card [disable PIN code checking beforehand].

 

Hardware connections

In the picture below you can see all the needed connections.

BeagleBone Black GSM wiring

Wiring details are revealed below:

BBB GSM wiring datasheet

Above, the h-nanoGSM shield is powered in the "WITH Lithium Polymer" configuration [powered from 5V], but using one 1F super-capacitor instead of the LiPo battery [a LiPo battery can be used as well].
IMPORTANT:

In our tests, the BBB was powered from USB. In this case, the h-nanoGSM was powered by connecting the BBB SYS_5V to the modem Vin [5V] pin!

Anyway, the recommended BBB powering option is via the 5V barrel connector. In that case, we recommend powering the modem from the BBB VDD_5V [wire the BBB VDD_5V to the modem Vin [5V] pin].

For other h-nanoGSM shield [or other ITBP modular modem] powering options, read the c-uGSM, h-nanoGSM and d-u3G how-to-start guide.

 

Let's do the magic [Software]

SOFTWARE ASSUMPTIONS

The real target is to make the BeagleBone Black compatible with our RPi support files for the h-nanoGSM [code examples] and with the PPP examples, making as few changes as possible.

The BBB Debian distribution is used in this how-to. We assume you will start from an out-of-the-box BBB [if you have used your BBB before, some steps may be skipped].

 

BBB SETUP TASKS

Connect the BBB to USB. Download the BBB USB driver from http://beagleboard.org/getting-started. Follow the instructions found on that page and install the USB driver. Connect to the BBB using SSH [the initial username is root, with no password].

 

DEBIAN SETUP [MAINLY BBB PORTS CONFIGURATION, BUT GATHERING SOME PACKAGES TOO]

a. Connect the BBB ethernet port to your LAN. Check connectivity to the internet. Enter the following commands in the shell:
apt-get update
apt-get install python-serial
and, optional:
apt-get install mc

 

b. Using your preferred editor [I like mcedit, which is why I installed the mc package, but you may use vi, vim...], edit the /etc/rc.local file:

mcedit /etc/rc.local

and insert the following line at the bottom, just before exit 0:

/etc/rc.config-itbp-modem > /dev/null 2>&1

Save.

 

c. Copy the following script as /etc/rc.config-itbp-modem [right click & save as]

Make it executable:

chmod 777 /etc/rc.config-itbp-modem
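The script contents are in the linked file. As a rough sketch of what such boot-time pin setup typically does on BBB Debian (an assumption of mine, not the actual script contents), the config-pin utility mentioned in the references below can set the same muxing by hand, assuming the universal cape overlay is loaded:

# Illustration only -- use the actual rc.config-itbp-modem from the link above.
config-pin P9.24 uart    # UART1 TXD
config-pin P9.26 uart    # UART1 RXD
config-pin P9.14 gpio    # modem RESET line
config-pin P9.16 gpio    # modem POWER line
config-pin P9.18 gpio    # modem STATUS line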

 

We are almost there... that was the Linux part. Just reboot your BBB [shutdown now -r or reboot will do the job].

 

PYTHON ITBP MODEM AND PPP SUPPORT FILES

a. Keep in mind that the Python ITBP modem and PPP support files were written to be compatible with the RPi and its Debian distribution. There are three major differences when porting the code to the BBB [Debian]:

a1. The serial port names: /dev/ttyAMA0 for the RPi and /dev/ttyO1 for the BBB [we assume UART1 will be used]. We will address this later.

a2. The RPi.GPIO Python class is not present or compatible with BBB Python. We will address this later.

a3. The BBB port addressing under Python is different from the RPi port addressing.

For a1, a2, and a3 we will apply some simple patches, later.

 

b. Download the h-nanoGSM Python and PPP support files from the download page. You will need to register with your name, your email address, and the IMEI of your h-nanoGSM [the IMEI can be found on top of the M66FA chip, or you can read it with the AT+GSN command].

 

c. Decompress the archives. These archives contain, along with other files, the following files related to Python modem control and serial communication: "hnanoGSM1_08_hw_control.py", "hnanoGSM_Serial_Lib.py" and "globalParVar.py".

 

d. Copy the following file, ITBP_gpioBBB.py [or right click & save as], into the very same folder where "mdmname_ver_hw_control.py" [in this case "hnanoGSM1_08_hw_control.py"] is located.

 

e. Fixing the a1, a2 and a3 differences:


e1. Edit the "hnanoGSM1_08_hw_control.py" file [mcedit hnanoGSM1_08_hw_control.py] and replace line 17:
import RPi.GPIO as GPIO with:
import ITBP_gpioBBB as GPIO

 

e2. Edit the "hnanoGSM_Serial_Lib.py" file [mcedit hnanoGSM_Serial_Lib.py] and replace line 40:
agsm = serial.Serial("/dev/ttyAMA0", serialSpeed, timeout=1) with:
agsm = serial.Serial("/dev/ttyO1", serialSpeed, timeout=1)

 

e3. Edit the "globalParVar.py" file [mcedit globalParVar.py] and set the "CONTROL interface for the ITBP modular modems" as below:
RESET = "P9_14"
POWER = "P9_16"
STATUS = "P9_18"

TEST THE SETUP

You may run any ITBP modem Python example file [e.g. python sendSMS.py].
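Before running the full examples, a minimal pyserial smoke test of mine (not part of the official ITBP files) can confirm the UART1 link; match the baud rate to the serialSpeed value from globalParVar.py:

import serial                                  # from the python-serial package

agsm = serial.Serial("/dev/ttyO1", 115200, timeout=1)   # baud rate: an assumption
agsm.write(b"AT\r")                            # basic attention command
print(agsm.read(64))                           # should contain b"OK" if the modem is up
agsm.close()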

Ready. Enjoy!

 

VARIANTS. REFERENCES.

You may try the setup using other UART ports, such as /dev/ttyO2 or /dev/ttyO4, or using other BBB I/O pins for the modem CONTROL.

First of all, check that the pins are free [the config-pin utility can give you valuable information], guided by the following references:
- https://github.com/cdsteinkuehler/beaglebone-universal-io
- http://www.armhf.com/using-beaglebone-black-gpios/
- http://derekmolloy.ie/gpios-on-the-beaglebone-black-using-device-tree-overlays/
- CAPE, what it's about: http://elinux.org/Capemgr
- BBB pins definition: https://github.com/jadonk/bonescript/blob/master/src/bone.js
- and last, but not least http://www.ti.com/product/am3359

Keep calm, understand what's under the BBB blanket, and write your own cape [the best approach]... and share it with us.

 

TUTORIAL & SOFTWARE ARE PROVIDED WITHOUT ANY WARRANTY!!! USE IT AT YOUR OWN RISK!!!!

 

Originally published by Dragos Iosub on itbrainpower.net

kk99

HummingBoard Pro case

Posted by kk99 Apr 6, 2017

I bought a case called EM-RasPI B for my single-board computer, the HummingBoard Pro. Because it is made from aluminum, I added a connector for an external Wi-Fi antenna. Here are photos of the HummingBoard Pro computer and the modified case.

[Photos: the HummingBoard Pro computer and the modified EM-RasPI B case]


A new system that combines BCIs and FES helps paralyzed patients move their limbs with their brain signals. Bill Kochevar moves his arm and eats independently for the first time in eight years. (via Cleveland FES Center)

 

What did you do this morning? Probably brushed your teeth, enjoyed a cup of coffee, and ate a quick breakfast. Nothing special, just part of the norm, right? These are actually luxuries we take for granted. Many people, especially paralyzed patients, don't have these abilities; rather, they need help with what we consider simple tasks. But researchers may have developed new tech to help with these tasks.

 

For the first time ever, Bill Kochevar, who was paralyzed in a bicycling accident, moved his hand with his mind and was able to feed himself without any aid. He achieved this with the help of an implanted brain-computer interface (BCI) system, as part of a trial by Case Western Reserve University and the Cleveland Functional Electrical Stimulation (FES) Center. The trial, dubbed BrainGate, is testing the safety and feasibility of BCI implants for paralyzed patients. The goal is to help people with paralysis regain some functionality and live more independent lives.

 

The BCI is nothing new; scientists have been working on these systems for a long time with limited results. Previously, patients could only control images on a screen with the tech. This new system combines intracortical BCIs (iBCIs) with FES, a system that stimulates nerves in the limbs to make the fingers move. FES has been used with paralyzed patients before, but they could only activate it with shoulder shrugs or nods, not with brain signals. Combining the two achieves the same results, but with increased mobility and activation directly by brain signals.

 

This is where Kochevar comes in. Researchers had him sit in a special MRI machine and asked him to imagine moving various parts of his body, then tracked which parts of his brain lit up. The collected data was used to implant electrodes at specific spots in Kochevar's brain, hooked up to a custom computer interface that deciphers the commands. But he couldn't step up to the task right away; he had to go through four months of training with a virtual arm to work up his strength. Once he was ready, the FES team used a 36-electrode array to strengthen Kochevar's arm and hand muscles. When the BCI and the FES were properly teamed up, Kochevar had the ability to eat, drink, and scratch his nose on his own. He said it feels the same, only with a slight delay.

 

There's no question about the results; it's a huge step forward in treating paralyzed patients. It also marks what is believed to be the first time researchers used technology to help a patient move on their own using signals from their mind. Though the system works, it's not quite ready for the masses: right now it's too bulky and complicated for everyday use. Researchers will work on making the tech smaller so it can be implanted in the body, and they also want to adapt it for the user's legs. There's clearly still a lot of work to be done, but with Kochevar's help, researchers are heading in the right direction.

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell


NASA's PUFFER is a new robot being developed at its Jet Propulsion Laboratory in Pasadena, California, designed to fold its wheels and explore terrain previously inaccessible to full-size rovers. The PUFFER robot was field-tested in snow on a recent trip to Mt. Erebus in Antarctica. (Photo via Dylan Taylor)

 

NASA never fails to impress.

 

NASA's recent technological innovation, known as the Pop-Up Flat Folding Explorer Robot (PUFFER), could open an entirely new area of exploration that full-size rovers physically can't access. The PUFFER project is a Game Changing Development (GCD) program effort, part of NASA's Space Technology Mission Directorate, and is managed by NASA's Jet Propulsion Laboratory. According to NASA, the GCD program “...investigates ideas and approaches that could solve significant technological problems and revolutionize future space endeavors.” Pioneer Circuits, a company out of Santa Ana, California, helped integrate a sturdy textile material called Nomex into the body of the PUFFER. This material has been used in the airbags of the rovers that landed on Mars, as well as by firefighters to repel heat, and will enable the PUFFER to endure high temperatures.

 

These compact bots can fold inward at the wheels, enabling them to navigate tight spaces, and their size allows them to explore treacherous areas like pits, sand dunes, and slopes as steep as 45 degrees. The origami-inspired foldability allows for compact storage as well, so a rover can efficiently carry many bots that would, ideally, act as an autonomous mobile team of scouts, collecting more information effectively and efficiently. NASA reports that the PUFFER can drive about 2,050 feet (625 meters) on a single charge, and this limited range is offset by the solar panels on its ventral side: it just has to flip over to recharge in the sun. Since the NASA team plans to make these bots autonomous rather than remotely controlled via Bluetooth, their range could exceed that of a remote-controlled apparatus, and the underside solar panels lessen the range restrictions even further.

 

The team at NASA hopes to use the PUFFER in future planetary missions, and it already boasts many Mars-compatible materials in its construction, including “heritage technology” from the Viking, Pathfinder, and Phoenix missions. Also, Distant Focus Corporation, a company out of Champaign, Illinois, provided a high-resolution microimager capable of picking up objects smaller than a grain of sand, significantly benefiting the robot's data-collection capabilities. The microimager is one of a variety of technologies that the NASA team wishes to incorporate into the PUFFER. The device is still a prototype, and NASA's desire to equip it with more and more scientific instruments (which could make it as large as a breadbox) ultimately raises a trade-off between size and maneuverability on one hand and technological robustness and information-collecting capability on the other.

 

The video provided below shows the PUFFER prototype in action.

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell

Hello,

Does anyone recognize the electric circuit below, or the protocol type that usually makes use of the 48-pin connector?

 

48-pin to ethernet.jpg

Any information about it is welcome!

 

Thanks in advance,


NASA recently published the third edition of its software catalog free to the public; LEGO announces a Women of NASA play set. The future of LEGO includes women of NASA (Photo via LEGO)

 

In recent years, NASA has been shedding its image as a stuffy, restricted institution and becoming the cool space nerd on the block. To be more open to the public, albeit slightly, NASA recently published its 2017-2018 software catalog, a portfolio of software products for different technical applications, free to the public with no royalty or copyright fees. It's NASA's way of offering support to aerospace professionals, students, and small businesses alike. The agency believes access to the software could lead to “tangible benefits” that can “create American jobs, earn revenue, and save lives.”

 

This isn't the first time NASA has offered its catalog to the public. It published the first edition in 2014 and has been sharing its software ever since. Some of the software in the catalog includes code for more advanced drones and quieter aircraft. Keep in mind, some code does have access restrictions, but NASA is working to keep the catalog up to date. You can get the third edition in both hard-copy and digital editions.

 

In other NASA news, toy giant LEGO announced a new space-inspired set featuring prominent NASA female figures. MIT News editor Maia Weinstock created the Women of NASA set. She submitted the idea to a LEGO competition and earned the 10,000 votes needed for the company to consider it. Now, the idea is set to become a reality. While the toy giant is still working on the set, it has revealed the women who will be replicated in brick form:

 

Katherine Johnson – a black physicist and mathematician who manually calculated trajectories and launch windows for early NASA missions, including the Apollo 11 flight to the moon in 1969. Johnson was recently portrayed by Taraji P. Henson in the film Hidden Figures. See my review of Hidden Figures, here.

 

Sally Ride – a physics professor who became the first American woman in space and the third woman overall. She remains the youngest American astronaut to have traveled to space, at 32.

 

Margaret Hamilton – a computer scientist who created the on-board flight software used for Apollo missions to the moon.

 

Nancy Grace Roman – one of NASA's first female executives, who worked on the Hubble telescope and developed NASA's astronomy research program. Her work earned her the nickname “Mother of Hubble.”

 

Mae Jemison – an astronaut who became the first black woman to travel to space and orbit the planet.

 

Weinstock and LEGO hope the set will inspire and encourage young girls interested in STEM fields. It also marks an important part of women's history that is often overshadowed.

 

Right now there's no release date or tentative price for the new set. LEGO is still working on the final design and hopes to have it out by late 2017 or early 2018. How do NASA and LEGO manage to get cooler every day?

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell