
Business of Engineering


(Source: Girl Scouts)

 

The Girl Scouts of the USA (GSUSA) pride themselves on being able to accomplish anything through learning and instruction. Like the Boy Scouts, these young women earn merit badges and awards for performing or being proficient in specific tasks, such as first aid, community service and even philanthropy (among a host of others). They can now add the STEM fields to that ever-growing badge list through the organization’s first computer science program and Cyber Challenge for middle and high school girls, both sponsored by Raytheon.

 

The new program is designed to prepare students for careers in areas such as cybersecurity, robotics, artificial intelligence and data science. Back in 2012, the Girl Scout Research Institute fielded a STEM survey and found that an impressive 74% of teen girls were interested in STEM subjects, but that interest fades during middle and high school, when their exposure to those fields does little to inform or support their career decisions.

 

Video: https://www.youtube.com/watch?v=f_QNc6uYnss

 

The goal of the new program is to get those students interested in the STEM fields and give them the confidence to maintain that interest when considering a career. According to Raytheon CEO Thomas Kennedy, “At a time when technology is transforming the way we live and work, we can - and should - show young women a clear path to taking an active role in this transformation. Working together, Raytheon and Girl Scouts will help girls build confidence to see themselves as the robotics engineers, data scientists and cybersecurity professionals who will create a better tomorrow.” He also stated that diversification of the STEM workforce needs to be accelerated, and he’s right: according to a 2009 Census Bureau report, women comprise 48% of the US workforce but hold only 24% of STEM-related jobs.

 

The other part of the program, Raytheon’s Cyber Challenge, will see teams of scouts pit their coding skills against one another; it will be held in 2019, a year after the pilot program is introduced in select cities. This isn’t the Girl Scouts’ first foray into computer science: the organization teamed up with Netflix in 2016 on ‘Helping Girls Become STEM Superstars,’ another program to spark scouts’ interest in the sciences. Additionally, earlier this year the GSUSA introduced new badges for cybersecurity, robotics, and computer science, further encouraging young girls to enter those fields.

 

C

The proposal would reclassify internet service as an information service, rendering the Title II regulations obsolete. Pictured: the faces of its demise. (Image credit FCC)

 

The FCC has released its final draft order on the net neutrality issue under the title “Restoring Internet Freedom,” and the proposal is expected to pass during the Open Meeting vote on December 14. The draft essentially repeals the net neutrality protections implemented during the Obama administration in 2015 by doing away with the Title II (common carrier) classification. That classification treated broadband as a utility rather than a service and therefore prevented ISPs from blocking, slowing down (throttling, which nearly every ISP does) or charging more for certain kinds of content and services. To put it bluntly, the order would in effect deregulate ISPs and nullify any oversight of broadband access.

 

FCC Chairman Ajit Pai has characterized the repeal as an effort to restore and improve broadband services by removing unnecessary regulations. “For almost twenty years, the Internet thrived under the light-touch regulatory approach established by President Clinton and a Republican Congress. This bipartisan framework led the private sector to invest $1.5 trillion building communications networks throughout the United States. And it gave us an Internet economy that became the envy of the world,” stated Pai in a recent press release. A fair statement, and that light touch did indeed see increased investment in a wider broadband infrastructure; too bad nobody’s using it.

 

Regardless, big-name tech companies and a host of other online entities (Google, Microsoft, Amazon, Facebook, etc.) are vehemently against the repeal, saying, for the most part, that it will stifle competition and innovation not only among big businesses but among smaller ones as well (see: declining profits). Communications giants AT&T, Comcast and Verizon, on the other hand, argue that the current rules have deterred broadband investment and need to be repealed in order for the “US to retain its leading role in shaping and benefiting from the internet” (see: increased profits).

 

As for consumers, millions (bots, maybe?) let the FCC know that deregulation would be a bad idea and would ultimately lead to higher internet bills based on content priority, such as being charged more to visit specific websites like Netflix. Think of cable television companies that charge more for individual packages: news, sports, entertainment and so forth. Whatever your stance on the repeal, Ajit Pai is defending the FCC plan, explaining, “The FCC will require ISPs to be transparent about their practices so that consumers can buy the service plan that’s best for them and entrepreneurs and other small businesses can have the technical information they need to innovate.”

 

That’s a great idea, unless you live in a rural area with only one provider. Moreover, when the repeal most likely passes in December, will we have to subscribe to everything just to know more?

 

This feels like the beginning of a dystopian sci-fi story.

C

See more news at:

http://twitter.com/Cabe_Atwell

Digital currency is on the fast track to phase out physical money, which poses several problems that will be costly in the long run. (Image credit Flickr)

 

One word comes to mind when thinking about digital currency: convenience. It’s easy to carry, doesn’t wear out and is accepted nearly everywhere. According to a 2015 report from the Federal Reserve, only 32% of consumers in the US used cash for their purchases that year, down 8% from 2012, and the numbers continue to fall. It’s never been easier to pay digitally: merely press a button, swipe a card or wave a smartphone and the transaction is complete.

 

Writing a check (nearly unheard of now) or unfolding banknotes and sifting through change is fast becoming an archaic way of settling debts. Even asking friends or relatives for a loan is as simple as getting a PayPal account, and home businesses can take advantage of digital currency using Square or Stripe. So why use cash anymore? Most news outlets will tell you it’s covered in germs and drug residue, and most online retailers will tell you there’s no need for physical money, as there was in the days of writing checks or getting a money order to ‘buy 12 CDs for the price of 1’ (or 8 CDs for a penny) through Columbia House.

 

Square and Stripe allow users to pay with credit and debit cards through their mobile devices. (Image credit Square)

 

The infrastructure for a cashless economy is already in place: banking can be done online or through an app with no need to ever visit a branch in person. Nearly every business, retail and restaurants included, accepts everything from debit cards to cryptocurrency, and even utilities allow for online payments. So again, the question remains: why use physical cash for anything? There are a few reasons not to transition entirely to an all-digital currency, the first of which is man-made and natural disasters. When the electricity goes down, so does the ability to pay with plastic or other digital methods. Clear examples of this were seen during hurricanes Harvey, Irma, and Maria. The electrical grids in the areas hit by those disasters went down, and people quickly found they could only get goods or supplies using physical cash. In fact, many Puerto Rico residents are still unable to get the things they need, as the electrical grid in most places is still down and Wi-Fi is nearly non-existent.

 

Puerto Rico’s power grid before and after Hurricane Maria. (Image credit NOAA)

 

Another reason not to go strictly digital is availability: the homeless, the underprivileged and even some kids may not have access to a bank account, a requirement (along with an internet connection and perhaps a mobile device) in most cases for using digital currency. Sure, digital services such as PayPal are free to use, but you still need to couple them with a bank account or other financial institution. For them, and even for some of us with digital accounts, carrying traditional money represents security, safety, and certainty. Unless the government implements a digital financial standard that makes it easy for everyone to bank and gain access to digital currency (even in power outages), a strictly all-digital economy may be a disaster in the making.

 

C

See more news at:

http://twitter.com/Cabe_Atwell

 

According to the PONF Project design, the digital camera back will use a digital shutter controlled independently by an Infineon microcontroller - at this date, our ideas are focused on the XMC1100 IC - hosting an Infineon TLE94112LE DC motor controller. These two components and the shutter itself will be controlled by the main processing unit (the Raspberry Pi Compute Module 3) via the I2C protocol. The PONF electronic architecture follows a modular approach, optimising the performance of each component under the control of the main processing unit.
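As an illustration of this control path, the following minimal sketch (an assumption, not final PONF code) shows how a Linux program on the Compute Module 3 could send one of the shutter command strings to the microcontroller over I2C, using the standard Linux i2c-dev interface. The bus number, the 7-bit address 0x21 and the "shInit" command string are placeholders for illustration only.

// Hedged example: send a shutter command string from the Compute Module 3
// over I2C using the Linux i2c-dev interface. Bus, address and command
// string are assumptions for illustration.
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>
#include <cstring>
#include <cstdio>

int main() {
    int fd = open("/dev/i2c-1", O_RDWR);           // I2C bus 1 on the Pi
    if (fd < 0) { perror("open"); return 1; }
    if (ioctl(fd, I2C_SLAVE, 0x21) < 0) {          // hypothetical shutter MCU address
        perror("ioctl"); close(fd); return 1;
    }
    const char cmd[] = "shInit";                   // command string parsed by the MCU
    if (write(fd, cmd, strlen(cmd)) != (ssize_t)strlen(cmd)) perror("write");
    close(fd);
    return 0;
}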

 

Shutter Controller for Testing

This experiment started with disassembling a Canon 60D electro-mechanical shutter.

 

The first step was isolating the test points on the flat connector of the shutter; checking the signal paths, I found the exact correspondence between every test point and the respective pin of the shutter connector.

The timings and the opening and closing sequences are expected to be driven externally by the camera board; there is no intelligence on the shutter module itself. It is built in three parts: a geared DC motor drawing about 800-900 mA that arms the shutter curtains, and two solenoids keeping them in place.

Based on the above, I have connected the shutter through the test points to the Infineon TLE94112LE Arduino Shield; referring to the IC datasheet, this component provides the features we need:

 

  • Twelve half-bridge power outputs
  • Very low power consumption in sleep mode
  • 3.3V / 5V compatible inputs with hysteresis
  • All outputs with overload and short circuit protection
  • Independently diagnosable outputs (over-current, open load)
  • Open load diagnostics in ON-state for all high-side and low-side
  • Outputs with selectable open load thresholds (HS1, HS2)
  • 16-bit Standard SPI interface with daisy-chain and in-frame response capability for control and diagnosis
  • Fast diagnosis of the global error flag
  • PWM capable outputs for frequencies 80Hz, 100Hz and 200Hz
  • PWM 8-bit duty cycle resolution
  • Over-temperature pre-warning and protection
  • Over and Under-voltage lockout
  • Cross-current protection

 

The TLE94112LE is conceived to manage both DC motors and solenoids, so I assumed it would be possible to control the two solenoids with the same IC, but it did not work. After further tests, I discovered that when the shutter is in use, the solenoids are triggered by pulling their control signals to ground. This behavior is incompatible with a direct output from the Arduino GPIO, so I added a small circuit and changed the wiring design; I also added a couple of LEDs to show when the two solenoids receive the signal to release the curtains.
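As a reference for this wiring, here is a minimal sketch of the pin setup, assuming the solenoid lines drive the NPN bases (active HIGH at the Arduino pin, which pulls the shutter signal to ground). The pin numbers are assumptions for illustration; the actual assignments live in the project headers.

// Hypothetical pin assignments (the real ones are in the project headers)
#define SH_TOP     2   // top-curtain solenoid, via NPN transistor
#define SH_BOTTOM  3   // bottom-curtain solenoid, via NPN transistor
#define LED_TOP    4   // status LED for the top solenoid
#define LED_BOTTOM 5   // status LED for the bottom solenoid

void setup() {
  pinMode(SH_TOP, OUTPUT);
  pinMode(SH_BOTTOM, OUTPUT);
  pinMode(LED_TOP, OUTPUT);
  pinMode(LED_BOTTOM, OUTPUT);
  digitalWrite(SH_TOP, LOW);      // both solenoids released at start-up
  digitalWrite(SH_BOTTOM, LOW);
}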

 

Control

The TLE94112LE Arduino Shield is connected to the shutter motor through the TLE94112 pin header; nets OUT1 and OUT2 connect two of the twelve half-bridges of the IC to the shutter DC motor.

The power line on pin header P2 is connected to the shield's motor power line; testing the circuit, I found empirically that the optimal power supply for the motor is about 7.5 V. Modifying the Arduino software, I progressively increased the supply voltage with a cheap DC/DC regulator until the TLE94112LE stopped reporting an under-voltage error condition.

The two solenoid signals are sent by the Arduino to a couple of NPN transistors. To monitor the solenoids' status changes I have also added a couple of LEDs.

Based on the above schematics, and after a breadboarding test, I created the PCB layout. It is a preliminary test architecture that will obviously be re-engineered to fit inside the final PONF camera design.

 

Prototype Assembly

The PCB has been engraved with a CNC machine, as shown in the images below. I have also designed a support simulating the optical path of the camera from the lens ring to the plane of the shutter blades.

Now the new shutter prototype can finally be tested. After the software development, obviously!

 

Software Design

The software controlling the shutter has been designed to emulate what happens when the shutter is used by a real camera; in a camera, the behavior is driven by user settings, which in our case are replaced by control commands. As you can see in the main sketch ShutterControl_I2C, the loop() architecture is extremely simple, limited to processing the commands sent via the USB serial port.

The sketch includes the motorcontrol.cpp class to control the TLE94112 Arduino Shield hardware, and a parser we will discuss below.

The latest updates to these sources are available on GitHub at PONF/ShutterControl_I2C

 

Commands

There are two kinds of commands: setting commands to initialize the shutter emulator and control commands to emulate the shutter features.

 

// Shutter commands (all prefixed with 'sh')
#define SH_MOTOR_INIT "shInit" ///< Initialise the shutter motor
#define SH_MOTOR_CYCLE "shMotor" ///< Execute a shutter motor cycle
#define SH_TOP_LOCK "shToplock" ///< Lock the top shutter frame
#define SH_TOP_UNLOCK "shTopunlock" ///< Unlock the top shutter frame
#define SH_BOTTOM_LOCK "shBottomlock" ///< Lock the bottom shutter frame
#define SH_BOTTOM_UNLOCK "shBottomunlock" ///< Unlock the bottom shutter frame

 

The setting commands defined below are used by the control commands to generate the correct shutter behavior.

 

// Shooting
#define SHOT_8S "8s" ///< 8000 ms = 8 sec
#define SHOT_4S "4s" ///< 4000 ms = 4 sec
#define SHOT_2S "2s" ///< 2000 ms = 2 sec
#define SHOT_1S "1s" ///< 1000 ms = 1 sec
#define SHOT_2 "2" ///< 500 ms = 1/2 sec
#define SHOT_4 "4" ///< 250 ms = 1/4 sec
#define SHOT_8 "8" ///< 125 ms = 1/8 sec
#define SHOT_15 "15" ///< 66 ms = 1/15 sec
#define SHOT_30 "30" ///< 33 ms = 1/30 sec
#define SHOT_60 "60" ///< 16 ms = 1/60 sec
#define SHOT_125 "125" ///< 6 ms = 1/125 sec
#define SHOT_250 "250" ///< 4 ms = 1/250 sec
#define SHOT_400 "400" ///< 2 ms = 1/400 sec
#define SHOT_1000 "1000" ///< 1 ms = 1/1000 sec

 

The control commands set and execute the shutter opening for the desired timing, as the camera would do under normal conditions. In addition, to simplify testing, I have also added four more commands

 

#define SHOT_MULTI125 "m125" ///< sequential shots 1/125 sec
#define SHOT_MULTI250 "m250" ///< sequential shots 1/250 sec
#define SHOT_MULTI400 "m400" ///< sequential shots 1/400 sec
#define SHOT_MULTI1000 "m1000" ///< sequential shots 1/1000 sec

 

to simulate continuous shooting with exposure times between 1/125 and 1/1000 sec. The number of cycles is defined in the MULTI_SHOOTING constant.
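A burst handler might look like the following sketch; the MULTI_SHOOTING value and the settle delay between frames are assumptions for illustration, while shot() is the single-exposure function shown later in this article.

#define MULTI_SHOOTING 5          // frames per burst (assumed value)

//! Hypothetical burst handler: repeat a single exposure of t ms,
//! e.g. t = 6 for the "m125" command (1/125 sec)
void multiShot(int t) {
  for (int i = 0; i < MULTI_SHOOTING; i++) {
    shot(t);                      // single exposure
    delay(100);                   // assumed settle time between frames
  }
}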

The commands.h preprocessor file also contains commented-out definitions left for future use and tests.

The MotorControl Class

The MotorControl class is the core of the shutter control; the class contains two kinds of API: a low-level API that interacts directly with the TLE94112LE IC, and a high-level API used by the parser to execute the commands. To work with the DC motor controller, the MotorControl class needs the TLE94112 Arduino library, published on GitHub by Infineon at the following address:

https://github.com/Infineon/DC-Motor-Control-TLE94112EL
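To make the two-level structure concrete, here is a hedged skeleton of the class; only the public members that appear in the sketch excerpts below come from the real sources, while the private low-level helpers are illustrative assumptions.

// Hedged skeleton of the MotorControl class structure
class MotorControl {
public:
  int currentMotor;                         // motor currently addressed
  // (plus an internalStatus[] array of per-motor flags, as used in the sketch)
  // --- High-level API, called by the command parser ---
  void startMotor(int motor);               // run the selected motor
  void stopMotor(int motor);                // stop the selected motor
  void setPWM(uint8_t pwmChannel);          // select the PWM channel
  void setPWMRamp(int rampMode);            // acceleration ramp on/off
  void setMotorDirection(int dir);          // CW / CCW
  void setMotorFreeWheeling(int fwMode);    // active / passive freewheeling
private:
  // --- Low-level API, talking directly to the TLE94112 ---
  void writeHalfBridge(int hb, int state);  // assumed helper
  uint8_t readDiagnosis();                  // assumed helper
};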

 

The Arduino Sketch

The ShutterControl_I2C.ino sketch is the main program of the application. Two important constants can be enabled or disabled to change the communication interface.

Defining the _SERIALCONTROL constant makes the Arduino accept control commands from the USB serial port, while defining the _I2CCONTROL constant makes it accept commands from another microcontroller or embedded platform through the I2C protocol; this last part is not yet completely working.
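Since the I2C side is still in progress, the following is only a hedged sketch of how it might be implemented with the Arduino Wire library; the slave address is an assumption, and parseNoCRLF() is the parser entry point described below.

#ifdef _I2CCONTROL
#include <Wire.h>

#define I2C_ADDRESS 0x21          // assumed slave address of the shutter controller

// Receive callback: buffer the bytes sent by the master and parse
// them immediately, as described in the loop() notes below.
void i2cReceive(int numBytes) {
  String command;
  while (Wire.available() > 0) {
    command += (char)Wire.read();
  }
  parseNoCRLF(command);           // I2C commands arrive without CRLF
}

void setupI2C() {
  Wire.begin(I2C_ADDRESS);        // join the bus as a slave
  Wire.onReceive(i2cReceive);     // register the receive callback
}
#endif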

 

// ==============================================
// Main loop
// ==============================================
/**
 * The main loop role is executing the service functions: display update,
 * checking.
 *
 * \note The I2C data availability and reading from the master is implemented
 * in a callback function. Data reading enables the command parsing immediately,
 * then the function returns to the main loop cycle processing the other statuses.
 *
 * \warning The diagnostic check based on the status of the running motors has
 * been removed from the loop, as the motor control methods check the diagnostic
 * status of the TLE by themselves when a command involving a motor is executed.
 */
void loop() {
#ifdef _SERIALCONTROL
  // -------------------------------------------------------------
  // BLOCK 2 : SERIAL PARSING
  // -------------------------------------------------------------
  // Serial commands parser
  if (Serial.available() > 0) {
    parseCommand(Serial.readString());
  } // serial available
#endif
} // Main loop

 

As mentioned before, the loop() function is extremely simple and manages only the command parser. The sketch includes a series of functions: macros that organize the MotorControl class APIs together.

 

//! Initialize the shutter motor
void initShutterMotor(void) {
  // Enable shutter motor
  motor.currentMotor = SH_MOTOR;
  motor.internalStatus[SH_MOTOR - 1].isEnabled = true;
  // Select shutter motor
  motor.currentMotor = SH_MOTOR;
  // PWM disabled
  motor.setPWM(tle94112.TLE_NOPWM);
  // Active freewheeling
  motor.setMotorFreeWheeling(MOTOR_FW_ACTIVE);
  // Disable acceleration
  motor.setPWMRamp(RAMP_OFF);
  // Counterclockwise direction
  motor.setMotorDirection(MOTOR_DIRECTION_CCW);
  // Initialize the shutter windows: both solenoids released
  digitalWrite(SH_TOP, 0);
  digitalWrite(SH_BOTTOM, 0);
}

//! Executes a single shutter motor cycle with delay
void cycleShutterMotorWithDelay(void) {
  // Start-stop test, SH_MOTOR_MS milliseconds long
  motor.startMotor(SH_MOTOR);
  delay(SH_MOTOR_MS);
  motor.stopMotor(SH_MOTOR);
}

//! Lock/unlock the shutter top window
void shutterTop(boolean s) {
  if (s)
    digitalWrite(SH_TOP, 1);
  else
    digitalWrite(SH_TOP, 0);
}

//! Lock/unlock the shutter bottom window
void shutterBottom(boolean s) {
  if (s)
    digitalWrite(SH_BOTTOM, 1);
  else
    digitalWrite(SH_BOTTOM, 0);
}

//! Shooting sequence
//!
//! \param t shooting time in ms
void shot(int t) {
  // Lock bottom curtain
  digitalWrite(SH_BOTTOM, 1);
  // Load the shutter
  cycleShutterMotorWithDelay();
  // Shoot
  digitalWrite(SH_BOTTOM, 0);
  delay(1);
  digitalWrite(SH_TOP, 1);
  cycleShutterMotorWithDelay();
#ifdef _SHOTMARK
  digitalWrite(SHOT_MARK, 1);
#endif
  delay(t);
#ifdef _SHOTMARK
  digitalWrite(SHOT_MARK, 0);
#endif
  digitalWrite(SH_TOP, 0);
}

 

The last important function of the sketch is parseCommand, which processes the command requests from the serial port and executes the corresponding tasks.

 

/** ***********************************************************
 * Parse the command string and echo the executing message or
 * a command-unknown error.
 *
 * The last two characters (CRLF) are removed from the command
 * before effective parsing. Use this function when the command
 * comes from a serial terminal.
 *
 * \param cmdString the string coming from the serial + CRLF
 * ***********************************************************
 */
void parseCommand(String cmdString) {
  int cmdlen;
  String commandString;
  commandString = cmdString;
  cmdlen = cmdString.length() - 2;
  commandString.remove(cmdlen);
  parseNoCRLF(commandString);
}
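The companion parseNoCRLF function is not shown in full here; a hedged sketch of its structure, matching the cleaned string against the command defines and calling the macro functions above, could look like this (only a few commands shown; the error echo text is an assumption):

void parseNoCRLF(String commandString) {
  if (commandString.equals(SH_MOTOR_INIT)) {
    initShutterMotor();
  } else if (commandString.equals(SH_MOTOR_CYCLE)) {
    cycleShutterMotorWithDelay();
  } else if (commandString.equals(SH_TOP_LOCK)) {
    shutterTop(true);
  } else if (commandString.equals(SH_TOP_UNLOCK)) {
    shutterTop(false);
  } else if (commandString.equals(SHOT_125)) {
    shot(6);                           // 1/125 sec, see the timing table above
  } else {
    Serial.println("Unknown command"); // assumed error echo
  }
}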

 


Programming Application Notes

These notes outline the application strategy we will follow to enable the Compute Module 3 to interface with the 24Mp Sony sensor of the PONF camera.

 

Software Sources

All the documents and links mentioned in and attached to this document are part of the publicly available documentation on the Videocore IV and the Broadcom BCM2837 SoC multi-core processor, mostly coming from the Broadcom documentation site itself.

The mentioned sources and source links are available on GitHub, released under GNU Open Source licenses, without violating any copyrighted information.

The PONF multi-camera project is an Open Source one. Of course, due to copyright, not every detail will be made publicly available.

 

Videocore IV Data Acquisition

The Raspberry Pi Compute Module 3 is based on the Broadcom BCM2837 SoC; among other components, this processor includes the main CPU and the Videocore IV graphics processor.

What are the advantages of this kind of architecture? The main processor runs Linux, but this multitasking operating system can't run in real time, and real-time behavior is exactly what we need to process the image data streaming from a camera sensor; this is the role of the Videocore IV GPU.

The Videocore IV graphics processor runs VTOS, a real-time operating system, precisely to solve this problem. This, in a few words, is the Broadcom BCM2837 SoC scenario. The VTOS RTOS uses the part of the Linux RAM reserved for video, usually 128 MB when a Raspberry Pi camera is installed in the system:

 

  • ARM1176JZF-S 700 MHz processor which acts as the "main" processor and typically runs Linux.
  • Dual-core Videocore IV CPU @250MHz with SIMD Parallel Pixel Units (PPU) which runs scalar (integer and float) and vector (integer only) programs. Runs ThreadX OS, and generally coordinates all functional blocks such as video codecs, power management, video out
  • Image Sensor Pipeline (ISP) providing lens shading, statistics and distortion correction
  • QPU units which provide 24 GFLOPS compute performance for coordinate, vertex and pixel shaders. Whilst originally not documented, Broadcom released documentation and source code for the QPU in 2014

 

There is a software mechanism to load the VTOS when the Linux OS boots.

After boot, Linux can communicate with the GPU operating system through a series of documented APIs.
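One concrete, documented example of such an API is the mailbox property channel exposed to Linux through /dev/vcio. The short program below queries the GPU firmware revision (property tag 0x00000001); it illustrates the Linux-to-GPU mechanism only and is not the PONF camera interface itself.

// Query the GPU firmware revision through the VideoCore mailbox
// property interface (/dev/vcio). Shown only to illustrate the
// documented Linux-to-GPU channel.
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <cstdint>
#include <cstdio>

#define IOCTL_MBOX_PROPERTY _IOWR(100, 0, char *)

int main() {
    uint32_t msg[7] = {
        sizeof(msg),    // total buffer size in bytes
        0,              // request code: process request
        0x00000001,     // tag: get firmware revision
        4,              // value buffer size in bytes
        0,              // tag request code
        0,              // value buffer (filled by the GPU)
        0               // end tag
    };
    int fd = open("/dev/vcio", 0);
    if (fd < 0) { perror("open /dev/vcio"); return 1; }
    if (ioctl(fd, IOCTL_MBOX_PROPERTY, msg) < 0) perror("ioctl");
    else printf("GPU firmware revision: 0x%08x\n", msg[5]);
    close(fd);
    return 0;
}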

 

How the Camera Sensor Works

The camera sensor operates through two types of communication lines. Control and setting commands are sent through the I2C serial protocol. After sending the I2C commands to set up the image sensor for acquisition, the camera application initializes the VTOS driver to receive the pixel data in real time, row by row.

The fast image data stream is handled by the VTOS driver through two differential lines, using the GPIO pins that support this feature.

All of this is possible because the Videocore IV includes the Pi camera driver.

 

24Mp Image Sensor Approach

The PONF sensor has a higher resolution but works on similar principles; it differs in the number of differential channels used to stream the image to the Videocore IV. In our case we need four differential channels, corresponding to a total of eight GPIO pins. According to the Compute Module 3 and Broadcom SoC pinout specifications, the lines we need are available, and the GPU is able to manage all the channels through the VTOS real-time OS.

 

References and Available Documentation

A large amount of public information, documentation and source code is available on the net, providing the detail needed to drive the design changes for the new 24Mp image sensor.

Videocore IV 3D Architecture Reference Guide: provided by Broadcom, it includes an in-depth description of and precise reference on how the Videocore works and how it relates to the rest of the SoC. This is a highly valuable resource, as it provides both hardware and software references for the quad-core hardware.

 

Videocore IV programmers manual

Videocore IV Linux Kernels

MIPI Specifications from the MIPI site

QPU Driver from raspberry pi ARM core

 

Most of the useful information and references come from Herman Hermitage's GitHub repository: https://github.com/hermanhermitage/videocoreiv

Another important source of information is the Raspberry Pi userland repository on GitHub: the source code for the ARM-side libraries interfacing to the Raspberry Pi GPU. These sources are a good starting platform for setting up the Linux-side applications interfacing to the new 24Mp sensor.

(https://github.com/raspberrypi/userland )

 

Changes and Upgrades

Due to the characteristics of the differential channels, the physical wire connections between the Broadcom BCM2837 SoC and the 24Mp image sensor must be of the same length. This is the most important constraint to take into account when routing the PCB: the nets wiring the differential GPIO pins should be of exactly the same length, according to the specifications mentioned above.

 

Software Driver

Creating the specific image sensor driver following the hardware specifications is the key step in the new design approach. The new driver implementation is derived from the current one available in the Raspberry Pi Raspbian Linux distribution, modified to support the wider data stream of the new sensor. The low-level driver will run in the VTOS environment.

 

Sensor Control Software

The new 24Mp sensor is controlled via a fast SPI serial protocol instead of I2C; a new series of Linux commands should be implemented via a library exposing a set of APIs to control the sensor. The commands will set up the driver and manage the data buffers and the other control parameters. The new command set will behave much like the already well-known Picamera Linux utilities. As far as we know, according to the Videocore IV software and programming documentation, the header files already support, in a similar way, the commands and parameters to set up the image sensor: resolution, image size, frame rate, etc.
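For the SPI control path, the Linux-side library could be built on the standard spidev interface. The sketch below shows the general shape of such a register write; the device node, SPI mode, clock speed and the register/value bytes are pure assumptions, since the real sensor register map is not public here.

// Hedged sketch: write a (hypothetical) sensor register over SPI
// using the Linux spidev interface.
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/spi/spidev.h>
#include <cstdint>
#include <cstring>
#include <cstdio>

int main() {
    int fd = open("/dev/spidev0.0", O_RDWR);       // assumed device node
    if (fd < 0) { perror("open"); return 1; }

    uint8_t mode = SPI_MODE_0;                     // assumed SPI mode
    uint32_t speed = 10000000;                     // 10 MHz, assumed
    ioctl(fd, SPI_IOC_WR_MODE, &mode);
    ioctl(fd, SPI_IOC_WR_MAX_SPEED_HZ, &speed);

    uint8_t tx[3] = {0x30, 0x12, 0x5A};            // hypothetical register + value
    uint8_t rx[3] = {0};
    struct spi_ioc_transfer tr;
    memset(&tr, 0, sizeof(tr));
    tr.tx_buf = (unsigned long)tx;
    tr.rx_buf = (unsigned long)rx;
    tr.len = sizeof(tx);
    if (ioctl(fd, SPI_IOC_MESSAGE(1), &tr) < 0) perror("SPI transfer");
    close(fd);
    return 0;
}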

The Videocore IV methods also interface with the hardware features of the Broadcom BCM2837 GPU for some image post-processing features, sensitivity settings, RGB value settings, etc.

According to the hardware specifications, the Videocore IV exposes post-processing capabilities that provide a ready-to-use sensor image in a buffer organized in pixel rows. We can compare this approach to a sort of in-memory file handle that the Linux side can easily convert and store as a RAW data file or compress to JPEG format.

Note that the Videocore IV also supports MP4 video compression; in the future, an in-depth investigation will determine whether the camera can include video acquisition.

 


The $900 bundle includes the Rift, a pair of Touch controllers, three sensors and three facial interfaces. (Image credit Oculus)

 

Oculus announced earlier this month that the VR company was releasing a bundle package aimed at businesses in an effort to help companies build their own VR experiences for clients and customers. The Facebook-owned platform envisions businesses using the Rift for a variety of applications, including collaboration and training in several different industries. “Businesses of all types can use Rift to boost productivity, accelerate training, and present the otherwise impossible to their employees and customers—across industries like tourism, education, medical, construction, manufacturing, automotive, and retail,” stated Oculus in an earlier press release.

 

 

Several notable companies have already started using Oculus for Business, including Audi, which uses the VR headset to let customers build their dream cars within an immersive environment, giving them the ability to see their creations from different perspectives. DHL uses the Oculus for training through their CIS (Certified International Specialist) program, which perhaps helps workers get a better understanding of how not to damage packages. Cisco has probably the best use of the Rift, pairing it with their Spark collaboration app, which lets users meet in a VR environment, share project information and 3D files, and even draw on virtual boards that are mirrored to real-world touchscreens.

 

According to Oculus, “This is a great opportunity—not only for businesses but for the long-term viability of VR. To become an indispensable part of our daily lives, VR must continue to impact the ways we collaborate, discover, and learn, at scale. With Oculus for Business, more people will get the chance to try VR and experience the magic first-hand.” That being said, the Oculus for Business bundle features the Rift VR headset, a pair of Touch controllers, three sensors and three facial interfaces (foam pads). Oculus is also throwing in dedicated customer support, special extended licenses and the warranties needed to set up the VR experience, all for $900, which seems like a steal considering what businesses can do with it.

 

C

See more news at:

http://twitter.com/Cabe_Atwell

Model-Based Design (MBD) is a mathematical and visual model-centric approach for designing and developing control, signal processing, communications, and other complex dynamic systems. MBD is an engineering paradigm shift that focuses on high-level executable models for system development and allows exploring a wide range of low-cost analysis, high-fidelity simulation, test-case generation, and early-development cycle concept proofs.

 

The core of MBD is the systematic use of models throughout the development process to enable system-level and component-level design & simulation, analysis, automatic code generation, and continuous test & verification, instead of relying on physical prototypes and textual specifications. As Model-Based Design considers all components (algorithms, control logic, and physical & intellectual property) usually associated with system performance, the resulting model becomes the source of many outputs (reports, C code, and HDL code).

 

MBD gives a standard environment to ease collaboration between development engineering groups:

- Software engineers can generate embedded code from simulation models and check if algorithms will work before writing the embedded code.

- System engineers can verify and test system components (mechanical, electrical, hydraulic, pneumatic, and software) using virtual prototyping before sending the design to hardware manufacturing.

- Mechanical engineers can create virtual assemblies (analogous to CAD software) to understand in advance how the product parts and elements will behave and interact with each other.

 

 

Because of its convenient, understandable graphical description of systems, continuous verification and validation at all stages of development, and inherent robustness against coding errors in early development stages, Model-Based Design has become a state-of-the-art method in the automotive, aerospace, and defense industries. It is seeing widespread adoption in motion control and medical & industrial applications as well, since reusing design elements for upgrades and expansion derivatives minimizes both financial impact and time-to-market.

Apple co-founder Steve Wozniak launches Woz U, an online university dedicated to providing skills needed to work in the tech world. Woz U wants to encourage people to learn the skills they need without worrying about debt. (Photo from Woz U)

 

Apple co-founder Steve Wozniak is hoping to encourage people to learn tech skills with his latest venture: online schools. Recently, Wozniak launched Woz U, an online university designed for those hoping to get a career in the tech industry. According to Wozniak’s statement, the goal here is to let people train in the skills they need “without putting them into years of debt.” He believes people often overlook technology-based careers due to high education costs and hopes his schools will give people the push they need.

 

Based out of Arizona, Woz U will start with online classes, with programs geared towards software developers and computer support specialists. There are plans to provide programs for data science, mobile applications, and cybersecurity sometime next year. Woz U also plans to offer platforms for tech companies to recruit, train, and retain their workforce via customized programs and subscription-based curricula. Eventually, Woz U hopes to move from online classes to physical campuses, with plans to build them in over 30 cities around the world.

 

The institute will also include Woz U Education, which offers STEAM programs to K-12 schools. Woz U will eventually introduce another program called Woz U Accelerator, a 12- to 16-month program launching in 2019 aimed at finding and developing “elite talent.” If it sounds a bit overwhelming, don’t worry; there’s an app to help students figure out which tech path is for them.

 

There’s no word on how much the classes will actually cost. And even though Wozniak is behind this, it still sounds a bit too much like the universities advertised on daytime TV. And since it just launched, we don’t know how effective the classes will be. It may be great for anyone who wants to get a foot in the tech industry or just learn some new tech-related skills, but you shouldn’t expect to waltz into a high-paying tech job.

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell

Grow with Google is a new initiative dedicated to teaching people new skills and preparing them for high tech jobs. Google is here to help you when machines take over your job. (Photo from Google)

 

We’ve been hearing about robots replacing humans for a long time now, but the fact is they actually are taking our jobs. Earlier this month, a McDonald’s in Springfield installed self-ordering kiosks. And try to find a grocery store now without a self-checkout lane. The sad truth is automation will take certain jobs away from people. But Google wants to put fears to rest with its latest initiative, Grow with Google. Is this a program that creates more jobs for people? No; instead, it’s a program that helps people learn new skills in tech fields.

 

The idea behind this initiative is that people can take advantage of various programs hosted by Google and learn a new skill to get a job in another field or even open their own business. The new website includes different programs for students, teachers, local businesses, job seekers, and startups. And it’s all available for free. There’s even one training program offering 50,000 Udacity scholarships for Android and web development – you don’t need any qualifications to apply.

 

The Grow with Google initiative will also host various events around the US to provide hands-on training and classes from Googlers. This new program is available for US residents only, but the company hasn’t forgotten about the international market. Google will embark on a global effort to provide non-profits around the world with $1 billion in grants to help them prepare for the “changing nature of work.”

 

Google isn’t the only company to launch an initiative to help train people for future jobs; Facebook launched a similar program, pledging $25.2 million to train workers for high-tech jobs.

 

These are great tools to take advantage of, especially if you’ve figured out that art degree isn’t working out for you. Rather than denying the rise of automation in the workforce, Google acknowledges it and is doing its part to make sure people aren’t kicked to the curb. But these free programs aren’t a one-way ticket into a high-tech field.

 

Sure, you may learn some new skills along the way, but you’ll be competing with thousands of people searching for those same jobs. And the truth is, companies will most likely go with someone more experienced. Even though it helps people, it doesn't solve the problem of the shrinking job market. Perhaps the only answer is for companies to reconsider replacing humans with automation, but as long as they believe they can save money, that won’t happen for a while.

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell

The past year was one of the deadliest on American roads: car crashes seriously injured 4.6 million people and took nearly 40,000 lives, according to the National Safety Council (NSC). The "Tri-Level Study of the Causes of Traffic Accidents" found that human errors or deficiencies were a definite or probable cause in nearly 93% of the incidents examined.


The Department of Transportation (DOT) stated that above 40% of fatalities happened because of “recognition errors,” which can include driver inattention (mostly from drinking or texting), internal and external distractions, and insufficient attention or impairment. The "Human Error and Road Transport" study identified those "recognition errors" as contributory factors in 95% of the crashes examined.


All of these car crashes carry high costs, not only for the economy (property damage, medical and administrative expenses, losses in wages and productivity, employer costs...) but also for public safety, the environment, infrastructure, and transportation systems. Several policymakers and regulatory agencies, alongside vehicle companies and scientists, are currently discussing tough questions about society’s future with vehicles that drive themselves and their interconnections.

Self-driving car innovation has the potential to drastically reduce human error or deficiencies (as autonomous vehicles simply do not make those recognition errors) and to hugely improve traffic systems, bringing a higher quality of life for everyone. Because of this, automation engineers suggest self-driving technologies can avoid traffic incidents.

Proterra put its Catalyst E2 Max to the test in Indiana and it passed with flying colors; it completed its route on a single charge. These buses could be on city streets sooner than you think. (Photo from Proterra)

 

Company made a battery the size of a bus, builds a bus around it.

 

With more electric cars driving down the highway, it makes sense that the next step is electric buses. While the technology is available, it needs to advance in order to be efficient. Proterra might have just proved they’re ready to hit the road. The California-based company’s Catalyst E2 Max drove 1,102.2 miles on a single charge, breaking the world record for the longest distance traveled by an electric vehicle without recharging. Tests were conducted at Navistar’s proving grounds in Indiana, which also confirmed the results.

 

Proterra’s electric bus uses a 660kWh battery, which is stored in the 40-foot-long bus body. While they’re not the first company to make battery-powered buses, their result is impressive when compared to those of Tesla and Hyundai. Tesla’s Model S P100D has a 100kWh battery to achieve a 315-mile range, and Hyundai is working on their Elec City bus, which has a 180-mile range powered by a 256kWh battery.

 

At this time, no transit agencies are using an E2 Max, but companies are showing interest. Proterra offers a model with a 350-mile range and has agreed to supply Foothill Transit in Southern California with a 35-foot model with a 35-mile range that can be recharged in ten minutes.

 

The results of Proterra’s test are impressive, but there are still some factors to consider before electric buses completely replace internal combustion models. How long does it take the bus to recharge? For a vehicle like a bus, this may not be a huge issue, since buses don’t travel at high speeds and could easily recharge at the end of their route. But just to stay on the safe side, Proterra developed a high-speed charging system for its buses. Still, the E2 Max model needs about an hour to get back to a full charge.

 

Another factor to think about is the cost. A Proterra bus costs roughly $750,000; a typical diesel bus is about $500,000. The electric bus does have lower operating costs, but with such a high sticker price to get the bus in the first place, they have a tough sale on their hands. With many cities facing their own budget crises, it’s hard to argue the need for electric buses when the price is so high. So, don’t expect to see these vehicles on the road anytime soon.

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell

165 gigawatts (GW) of power came online in 2016, and the numbers continue to rise. (Image credit Pexels)

 

I'm not a big fan of wind energy, so I was looking into recent developments in solar. I like what I am seeing.

 

According to a recent publication by the IEA (International Energy Agency), solar led the way in renewable energy in 2016, accounting for nearly two-thirds of new power capacity around the world, with nearly 165 gigawatts coming online. The IEA cites a boom in solar development and the deployment of PV technology from China and other manufacturers as the reason behind its popularity; capacity grew by 50%, with over 74 GW added by China alone. Of course, efficient PV technology, lower prices, and changing government policies (except in the US) helped bolster those numbers, which continue to rise.

 

The publication also cites solar costs as low as 3 cents per kilowatt-hour in countries such as Chile, Mexico, India and the United Arab Emirates, making the renewable an attractive alternative to fossil fuels. Surprisingly, solar energy in the US now costs just 6 cents per kilowatt-hour, beating the DOE's 2020 cost target three years early, and that figure does not include Investment Tax Credits that would make the price substantially lower. These findings have most experts predicting that solar will remain in the top spot among renewables, with electricity capacity increasing by 43% over the next five years.

 

Dr. Fatih Birol, the executive director of the IEA stated, “We see renewables growing by about 1,000 GW by 2022, which equals about half of the current global capacity in coal power, which took 80 years to build. What we are witnessing is the birth of a new era in solar PV. We expect that solar PV capacity growth will be higher than any other renewable technology through 2022.”

 

While the world's nations are embracing renewable energy at an impressive rate, the US is starting to lag behind, even though the price per kilowatt-hour of solar is improving. In fact, the US remains in the number 2 spot (behind China) in the growth market for renewables, including onshore wind and solar; however, the IEA said that despite these encouraging figures, uncertainty over tax reform, international trade, and energy policies could alter their attractiveness and stifle the growth of renewables in the long term. Considering the Trump administration’s push for coal and other fossil fuels, the US may fall behind the rest of the world, but at least the numbers remain encouraging, for now.

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell

Nissan Autonomous Drive. (Image credit Norbert Aepli via Wiki Commons)

 


 

The Senate Committee on Commerce, Science, and Transportation voted to pass Bill S. 1885, the American Vision for Safer Transportation through Advancement of Revolutionary Technologies (AV START) Act, which was introduced by Senator John Thune (R-SD) and is meant to advance efforts to improve roadway safety through the development of autonomous vehicles.

 

In general overview, the bill allows automobile manufacturers to pursue safety exemptions for autonomous vehicles based on production volume, while giving individual states the ability to regulate registration, licensing, safety and insurance for those vehicles. This lets the Senate establish what it calls ‘a balance between federal and state laws affecting self-driving vehicles.’

 

What’s more, the bill carries several amendments, including one with legislation to reform the TSA (Transportation Security Administration) and establish a ‘National Suicide Prevention’ hotline, presumably aimed at preventing suicides in said AVs. The safety exemptions, meanwhile, are based on production volume, which is capped for each manufacturer at 15,000 vehicles the first year and 80,000 over the next three, with no cap from the fourth year on. The bill also exempts self-driving semi tractor-trailers, owing to labor unions' concerns about safety and job security for truckers; the unions are vehemently opposed to the technology altogether.

 

Companies such as GM, Ford and Alphabet Inc. have been lobbying for some time for the landmark legislation as a way to increase revenue from auto sales, even though safety groups urged more safeguards, which this bill leaves in states' hands through the regulations mentioned earlier. While some fear the bill could stymie AV development (especially in trucks) and argue the technology isn’t yet sophisticated enough to be handed control, others see it as the first stone laid in the foundation of AV rules and regulations designed to advance the technology.

 

At least that’s what Senator Thune and other members think, stating, “Today’s vote underscores the bipartisan desire to move ahead with self-driving vehicle technology.” He goes on to add, “Sen. Peters and the members of the Commerce Committee deserve credit for working together to move this bill forward toward Senate floor consideration and collaboration with our colleagues in the House of Representatives. The safety and economic benefits of self-driving vehicles are too critical to delay.”

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell

The reality of Artificial Intelligence (AI) today differs wildly from the man-hunting machines we envisioned 20 years ago. What is AI today? And is it really anything to be afraid of?

 

Some 20 years ago, talk about Artificial Intelligence (AI) conjured up images of humanoid robots of superior intelligence and strength. Philosophical questions regarding the true definition of what constituted life followed, along with striking images of an all-out battle between man and machine. We would almost always lose those wars. Films and books have taught us to dread AI.

 

But those fears have not come to fruition. The AI of today is a beast of a different make and model – a monster that lives in colossal databases, feeding on data. This day and age spews out a deliciously vast amount of data every second. The AI beast can consume as much as it wants, learning more as it does.

 

Google defines AI as “the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

 

It is everywhere. AI powers our mobile phones. It brings smart speakers like the Amazon Echo and Google Home to life. It “lives” in computer programs like IBM’s Watson and Google’s AlphaGo. The innocuous devices sit and listen all day, now as commonplace as any appliance.

 

We are no longer concerned AI will one day rise and enslave mankind, and we shouldn’t be. We should be aware, however, of its potential impact on labor.

 

Researchers at Oxford recently conducted a study to observe the future of the labor market in relation to trends in computerization. The scientists examined 702 detailed occupations across all industries and found 47 percent of jobs to be at high risk of computerization. Professions like machinist, engineering technician, librarian and telemarketer may be some of the first to go. These aren’t just the entry-level jobs, either; every industry is expected to feel the impact. A similar study conducted by Gallup found Millennials were most at risk.

 

In both studies, those expected to keep their jobs were people in leadership positions and, believe it or not, creatives. Not just artists: those with the creative problem-solving unique to the liberal arts and humanities are expected to be in increasing demand. The demand is projected to be even higher for those with both creative and technical skills, Amazon’s Senior e-Book Content Producer Amanda Koster told Forbes. If you need more proof, some companies have even created specific onboarding tracks for soft-science grads.

 

The labor market has changed considerably over the last century, and it certainly isn’t about to normalize now. Change is the constant. As always, to remain relevant workers need to keep up with the trends, whatever they may be.

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell

One of the first dedicated 3D printer manufacturers to establish themselves in Spain, Tumaker has found rapid success in the B2B market. Now, Jon Bengoetxea hopes to help bring 3D printing to the masses with a new range of consumer-focused devices that allow virtually anyone to get to grips with additive manufacturing - regardless of previous experience or technical know-how.

 

We spoke to Bengoetxea as he was preparing to showcase Voladd - the first model in Tumaker's planned consumer 3D printer line - at the IoT Solutions World Congress in Barcelona.

 

The Voladd 3D Printer from Tumaker

 

Describe your company Tumaker in a couple of sentences

 

Tumaker is the '3D-Printing Connected Things' company. As one of the few 3D printer manufacturers based in Spain, we have targeted two distinct business areas. The first is industrial 3D printers, designed to help improve industrial processes for a variety of applications. The second is our new venture into consumer 3D printers, starting with Voladd, which will be launched in October.

 

What are the origins of Tumaker?

 

I took the first steps towards establishing Tumaker back in 2012. At that time I owned a Technology Consultancy company called K35 IT Managers Group. We spent a lot of time reflecting on future trends and analysing different technologies. 3D Printing was something we quickly identified as a promising area that could be relevant for the future of many businesses.

 

Initially we began experimenting with 3D printing technology as part of our general research and development, but it quickly evolved into a lab start-up - creating prototypes, testing the market and so on. By 2014 it was clear that the project had evolved, so I founded Tumaker as an independent company. Since then we've grown very quickly, from just three employees at the start to almost thirty today. I'm proud to say that we're now a leader in the Spanish industrial 3D printing market.

 

What is Voladd?

 

Voladd is a first-of-its-kind connected 3D printer. The product is a piece of hardware plus a software package in the cloud. You can connect via the internet at home or remotely using a smartphone, tablet or PC, choosing from a catalogue of thousands of ready-to-print objects on our cloud-based web platform. You can also design and print your own objects using your preferred design software.

 

Over the next few months we're hoping to launch a community-style environment where users can upload their designs for free - it'll be a totally open-source philosophy. I see the future of 3D printing as essentially "streaming" for objects. With Voladd, it's as easy to print a digital design as it is to listen to a song on Spotify or watch a film on Netflix.

 

Tumaker CEO Jon Bengoetxea

 

What were your main goals in developing Voladd?

 

Our vision for Voladd was to empower people to create things wherever and whenever they want, but without requiring specific technical skills. Music used to be available only on tapes and vinyl, but is now primarily consumed online. We feel that objects are destined to go the same way. They're very important in our lives; we're surrounded by them. But right now, if we need something, we have to either find somewhere to buy it or order online and await delivery. Voladd represents a different, more sustainable way to acquire the various everyday objects that we need. If I want something, I can have it now.

 

In terms of the design, we wanted to move away from the typical box-shaped 3D printer design and create something more aesthetically pleasing that you could feel proud to have in your home. It's a printer that doesn't really look like a printer - it's more similar in design to a modern coffee machine.

 

What kind of consumer is Voladd aimed at?

 

I see 3D Printing as a very transversal technology. Think about a smartphone - is it for adults or for kids? The answer is that it's for both. That being said, we understand that as this is a relatively new technology, we need to engage in some "pedagogic" work to help people to understand how it works and all the things they can do with it.

 

Our core consumer at this stage is a person with ideas and values that align with the product. Voladd can be a great tool for sustainability. There's no need for deliveries - which add to your carbon footprint - or packaging - which is often a waste of materials. We want our product to stimulate people - it rewards intelligence, imagination and creativity. So at first, we expect our typical consumers to be what you might describe as 'techy' or 'millennial' - young people with values who are unafraid of embracing new ideas. But it's ultimately conceived as a domestic technology, so there's really no limit to the type of person who could benefit from using it.

 

Who do you see as your main competition on the 3D Printing market? How does your product stand out from your competitors?

 

Currently, 3D printing is a very trendy technology, but it requires certain skills to use - a significant barrier to entry. This is where we're making the difference. Our printer is for everybody, from people with no technical skills to experienced 3D-printing users. Voladd is truly a mass-market product.

 

Therefore, we do not have a direct competitor. This is the first initiative to combine a 3D printer and a software platform in the same product, requiring no installation or additional software to get started. It is also the first shareable printer - you can go online and send an object design from San Sebastián to a printer in New York, for example.

 

Voladd, the consumer 3D printer from Tumaker

 

How has your relationship with Premier Farnell influenced the project?

 

Premier Farnell has provided the heart of Voladd's technology - the BeagleBone Black. This is the device which handles the transactions between the printer and the cloud. It runs all the software that allows you to select an object from your smartphone and send it directly to print. We placed our order through Premier Farnell because they are a global distributor of the BeagleBone Black, and we have a strong relationship with them.

 

Have you worked with any other partners or collaborators on Voladd?

 

The technology itself was developed in-house at Tumaker, but we've worked with a number of different investors. One of our principal investors has been CAF - one of the most important railway companies in the world. We received design assistance from a Catalan company called Loop, who have also been involved in design work for Nespresso. All of the manufacturing, assembly and logistics is also done in Spain, primarily in the Basque Country. It's a truly European project.

 

What are the biggest challenges you’ve faced so far?

 

Our biggest challenge has been acquiring the finance - nothing new there. There's also the fact that our product is a complex thing to market. It's not an app or a piece of software - it's both software and hardware. Because we started from scratch, we have had to inspire trust in a lot of people, and to co-ordinate a lot of different people and requirements at the same time.

 

Ultimately, the key to overcoming these challenges is to trust your team and ensure that everybody is in a position to do their best. I'm very happy with what we've achieved so far. There's always a degree of anxiety when you're watching a project finally come to life, but I believe that when we hit the market we can help to create a new culture of free, near-instant object creation. I'm excited to see the new opportunities that could create...

 

Where do you hope to be in 12 months' time?

 

In 12 months, we'd love to see a better global understanding of what the 3D printer can do. Obviously, we'd also like to see this understanding converted into sales for Voladd! However, we understand that not everybody is going to embrace this concept from day one - it's going to require a transitional period. For those people who do understand what we're trying to do, we hope that Voladd can establish itself as the premier commercial 3D printer on the market.

 

Voladd will launch first in Spain, Portugal and Germany simultaneously. A Kickstarter campaign for a global rollout of the product will launch in October 2017.