Here’s the latest.
The U.S. Consumer Product Safety Commission (CPSC) has issued a recall for 1000-foot boxes of CE Tech CAT6 riser cable sold at The Home Depot between January and February 2013. In its recall notice, the CPSC stated that the “cable does not meet fire resistance standards for riser cable, posing a fire hazard.”
The recall notice can be seen here.
The CPSC explains that the “cable is intended to run between floors of a building as data cable. This type of cable must self-extinguish in a fire. The cable is gray and marked (UL) E316395. The cable’s box is blue and black and is marked CE Tech 1,000 ft. riser cable, Cat 6 23-4.”
If you or anyone else purchased this cable, the CPSC is advising that you remove the cable and return it to Home Depot for a full refund.
The cable sold for $100 a box, which is incredibly inexpensive for CAT6 riser cable. If you see cable being sold at a price that seems too good to be true, treat it as a red flag. Tested and verified cable from Black Box (or other name manufacturers) does not come cheap. You get what you pay for.
Should you go for the LCD or plasma video display? It depends. Here are a few tips to help you choose.
Plasma displays reproduce color more accurately with deeper blacks and display moving images with remarkable clarity. They provide excellent performance with their high contrast levels and color saturation, and have the edge when it comes to viewing angles. In fact, plasma screens offer viewing angles as wide as 160°, whereas LCDs typically manage 130-140°. However, they also carry the risk of image burn-in (the permanent disfiguring of a screen image caused by the continuous display of a high-contrast object).
LCD displays, on the other hand, don’t have quite the color accuracy of plasmas, but they’re brighter and have a sharpness advantage with a higher number of pixels per square inch. These additional pixels make LCD technology better at displaying static images from computers or VGA sources in full-color detail. Applications with large amounts of data and written material display particularly well on LCDs. What’s more, there’s no risk of image burn-in.
With LCD screens, there are essentially no parts to wear out. They last as long as their backlights do, with displays lasting, on average, 50,000-75,000 hours. That’s why LCD screens are especially good for applications such as digital signage or displays that require around-the-clock use.
Plasma screens, however, use a combination of electric currents and noble gases (argon, neon, and xenon) to produce a glow, which in turn yields brilliant color. The half-life of these gases, however, is only around 25,000 hours. The glow they produce grows dimmer over time. They’re also prone to burn-in or ghosting of images, although this is less of a problem with newer models.
Early plasmas had a very high power consumption; some as high as 5W per square inch. These values are now down in the 0.3-1.0-watt range, depending on screen size. LCDs typically run in the 0.1-0.3-watt per square inch range, and LEDs are even lower. Manufacturers are now required to provide power consumption information, but keep in mind that there are two values for consumption, default and calibrated, so be sure you’re comparing like values.
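To see what those per-square-inch figures mean for a whole display, here’s a quick back-of-the-envelope sketch; the 42-inch screen size and the midpoint wattages are illustrative assumptions, not measurements of any particular model:

```python
# Rough screen-power estimate from the per-square-inch ranges above.

def screen_area_sq_in(diagonal_in, aspect=(16, 9)):
    """Viewable area (square inches) from diagonal size and aspect ratio."""
    w, h = aspect
    # diagonal^2 = width^2 + height^2, with width:height = w:h
    scale = diagonal_in / (w ** 2 + h ** 2) ** 0.5
    return (w * scale) * (h * scale)

def estimated_watts(diagonal_in, watts_per_sq_in):
    """Total draw for a screen at a given per-square-inch rating."""
    return screen_area_sq_in(diagonal_in) * watts_per_sq_in

area = screen_area_sq_in(42)          # ~754 sq. in. for a 42" 16:9 screen
plasma_w = estimated_watts(42, 0.65)  # midpoint of the 0.3-1.0 W plasma range
lcd_w = estimated_watts(42, 0.2)      # midpoint of the 0.1-0.3 W LCD range
print(round(area), round(plasma_w), round(lcd_w))
```

At those midpoints a 42-inch plasma draws roughly three times what the equivalent LCD does, which is why power consumption matters so much for screens that run all day.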
Cost: Professional vs. consumer
When selecting a video display, you may find that “professional grade” or “commercial grade” models cost significantly more than televisions you can buy at the local “mega mart.” The primary difference is that professional-grade displays are built to stay on 24/7 for weeks and months at a time without breaking down. They may also offer features such as video-wall processors, scheduling options, and lockable control panels not normally found in consumer-grade televisions.
Making the choice
In general, plasma produces a clearer picture with a wider viewing angle and a better response time for fast motion playback, making it a good choice whenever you need a large screen to show a very visually active display, for instance, in applications displaying sports footage or active advertisements.
LCDs are better at displaying detailed, static information. Because LCDs are brighter, they’re ideal for venues with lots of ambient light. They’re also the best choice for 24/7 applications because of their lower power consumption. For these reasons, LCDs are preferred for professional AV display installations.
There’s a lot at risk if you install non-compliant cable, either knowingly or unknowingly. In addition to low network performance from counterfeit and low-grade cable, installing non-compliant cable can result in violations of state and local building codes and fire regulations. If a contractor installs non-compliant cable and it causes damage, such as a fire, the contractor can face civil liabilities and monetary damages stemming from negligence, fraud, and breach of contract and warranty. In addition, contractors can face criminal liabilities stemming from building code violations. Enforcement can include halting the installation and removing and replacing the cable, which can be extremely costly. Other criminal penalties can include fines and imprisonment. The costs of using counterfeit or non-compliant cable can be very high indeed.
Over the past few months there has been an increase in Intertek Testing Services (ETL) and Underwriters Laboratories (UL) warnings concerning unauthorized, hazardous, and/or counterfeit cable. The following alerts have been issued this year:
These and past alerts can be found on UL’s website under Newsroom > Public Notices. ETL warnings can be found in their Inspector Information Center. Public notices are located on the right-hand side of the webpage.
It’s estimated that as much as 20% of the cable now for sale is unsafe, unapproved, or counterfeit. Agency representatives and code officials are aware of these risks, but if you’re planning to purchase cable you need to know how to protect your team, organization, and building—literally. The YouTube video below provides tips and tricks for spotting cheap, unapproved, or counterfeit cable.
This is part four of a five-part series on digital signage deployments. For part three, click here.
Today we’ll discuss the advanced digital signage deployment: multiple-screen/multiple-zone/multiple-room displays with extensive functionality, such as individual screen messaging. If you are considering a larger deployment with a fully integrated network solution, enlist the help of a seasoned digital signage professional. Extensive negotiations, including a number of RFPs and RFQs, may be necessary to specify the system and negotiate its price for your needs. Also watch for any SaaS fees outside of hardware and labor expenses.
Advanced digital signage systems can deliver the ultimate in management, control, and functionality for K–12 institutions. These state-of-the-art systems feature heavy-duty processors for playing bulky media files and streaming seamless video in higher resolutions. They’re fully networked, large-scale solutions that are designed for scalable, multiscreen, and even multilocation deployments.
These types of systems are well-suited for large school districts with many buildings in different locations, specifically districts that need to be able to display a wide range of bandwidth-heavy media and stream (or narrowcast) unique content to the individual screens based on location and time of day, and be able to verify playout on those screens. This stage adds a video server residing on the network, which means you can add live video through the use of connected cameras as well as streaming and stored video capability.
There’s effectively no ceiling on the price of these systems because you can keep adding screens. But once you expand to multiple locations, you’ll want immediate central management capabilities.
Once you get into more sophisticated systems, you want play logs for advertising, etc., but most of all, you want the remote management capabilities to know if screens are on, if the media delivery system is working, and if the content is being displayed. You also want the full capability of making real-time changes to react to last-minute district-wide decisions, athletic event cancellations, changes in bus schedules, or other events. Literally within seconds, changes can be made, deployed, and seen in one location or over the entire network in many locations.
This is part three of a five-part series on digital signage deployments. For part two, click here.
A step above the moderate solution is one with TV capability. This solution produces the same content on all screens and encompasses multiple-screen/multiple-zone/multiple-room displays with live TV capabilities.
Moderate with TV capability ($5500 to $8000) — $$$
This system is very similar to the moderate system, except that this level gives users the ability to integrate live TV into the digital signage content. This is done via a TV tuner or capture card that is part of the media player. It picks up TV signals via satellite or digital cable, much like a receiver on consumer TVs.
This becomes particularly useful if you don’t readily have the ability to update content. In lieu of this content, your displays can show programming from acceptable sources—network news channels or your local community’s public access channel, for example—in a split-screen configuration on your signage. It’s also nice for situations when you need up-to-the-minute information, like updates from the Weather Channel or bulletins from the Emergency Alert System (EAS).
Typically, when reaching the moderate and TV-tuner level, you use a higher level of digital signage software. Higher-end software not only enables you to create multiple content zones on the screen, but also easily schedule content for each zone (so you can schedule content for the day, week, or month by zone) and better control elements on the TV feed, as well as content override features for interrupting routine content streaming with emergency alerts programmed from a remote location.
Best areas for use: Same as the areas listed in the previous post, but because of the TV input, also useful in school cafeterias, rooms used for extracurricular events, faculty break rooms, and school TV studios and media production departments.
Content-delivery method: Network infrastructure, satellite feeds, cable television.
Pros: Provides live TV feeds to complement on-screen content; can provide instant messaging and emergency notification; usually includes more content-management capabilities and functionality.
Cons: Maintenance of a satellite or TV feed and IP connection; more advanced software training required; potential bandwidth and network maintenance issues; additional ongoing maintenance and software licensing costs.
NOTE: Estimated prices for solutions include a 42-inch LCD screen, media player, and digital signage software. Prices can vary depending on a number of factors.
This is part two of a five-part series on digital signage deployments. For part one, click here.
Last week we reviewed the basic price point and model of the ultra-affordable digital signage solution for schools. Today, we’ll discuss the moderate solution, or a solution that includes multiple-screen/multiple-zone/multiple-room displays, with the same content on all screens.
Moderate ($4500 to $7000) — $$
The biggest differences between the moderate and ultra-affordable systems are that with moderate systems, you can display more than one area (zone) of content within a presentation, and the same content can be seen on multiple screens in multiple rooms at a single site. What’s more, the players are often network enabled and support streaming of video (not just from a file loaded onto a storage device). Plus, you typically have the ability to stream live Web feeds as a standard feature.
A zone is an on-screen area (measured by pixels or as a percentage of entire screen) that shows content from its own playlist. Because moderate-priced systems support multizone presentations, you can play different media in different screen areas. Some zones can change while other areas remain fixed. The zones may or may not be resized or moved to a different location on the screen. In most cases, each zone can be managed individually so you can dynamically change the content as needed. One zone might show video, while another shows the local weather forecast. Still another area might show a changing menu or schedule. It’s all up to you.
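As a minimal sketch of how zones carve up screen real estate, here’s a toy layout resolved from fractions to pixels; the zone names, proportions, and 1920x1080 resolution are invented for illustration:

```python
# Toy multizone layout: each zone is a fraction of the screen,
# resolved to pixel coordinates. All names and sizes are examples.

SCREEN_W, SCREEN_H = 1920, 1080

# (x, y, width, height) as fractions of the full screen
zones = {
    "video":   (0.00, 0.00, 0.75, 0.80),  # main video area
    "weather": (0.75, 0.00, 0.25, 0.80),  # side panel, e.g. forecast
    "ticker":  (0.00, 0.80, 1.00, 0.20),  # full-width crawl at the bottom
}

def to_pixels(zone):
    """Resolve a named zone to (x, y, width, height) in pixels."""
    x, y, w, h = zones[zone]
    return (round(x * SCREEN_W), round(y * SCREEN_H),
            round(w * SCREEN_W), round(h * SCREEN_H))

for name in zones:
    print(name, to_pixels(name))
```

Because each zone is just a named region with its own playlist, swapping the content in one zone never disturbs the others.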
Because this type of system also supports multiple screens, it’s great for broadcasting information to different areas of your building or campus. Plus, because it can be installed on a network, you can control multiple screens from a central PC. This control can be in real time and include instant-messaging capability. Some systems also give the administrator the ability to turn the screens on and off. In addition, screens can be controlled remotely with a browser and an IP address, giving extra flexibility to an administrator who has to be away from his or her command station during the day.
These systems frequently include a tool for aggregating RSS feeds, so you can collect and automate the distribution of Web-based info, such as live CNN news or National Weather Service bulletins, as video crawls on your signage. This is a time-saver because the administrator no longer has to constantly gather Web content, worry as much about inappropriate subject matter streaming to the screen accidentally, or write any extra code. These feeds can also be from local law enforcement and internally created sites, from teacher blogs and departmental sites, for instance. Still, because it requires an Internet connection, you have to have adequate bandwidth, and initial and ongoing IT support, as well as deal with permissions, access, and admin rights.
Best areas for use: Small to midsize school buildings with multiple entrance points, food-service lines, and lobbies where students and the public gather; also for schools with a building-wide network infrastructure and various departmental Web sites.
Content-delivery method: Existing or designated network infrastructure.
Pros: Multiple screens can be controlled via the network connection; content and screen operations can be updated remotely from a central PC; enables RSS feeds and other real-time content from the Internet, including streaming video.
Cons: Adding an IP connection means IT involvement; advanced software may require additional training; potential bandwidth and network maintenance issues, as well as the increased content “gatekeeper” role of the administrator.
NOTE: Estimated prices for solutions include a 42-inch LCD screen, media player, and digital signage software. Prices can vary depending on a number of factors.
When it comes to deploying digital signage, schools have an almost unlimited number of options. We’ve organized them into four major categories to help you select the most appropriate system to support your objectives, application, and budget:
• Ultra-affordable: Single-screen/single-zone/single-room display
• Moderate: Multiple-screen/multiple-zone/multiple-room display—same content on all screens
• Moderate with TV capability: Multiple-screen/multiple-zone/multiple-room display with live TV—same content on all screens
• Advanced: Multiple-screen/multiple-zone/multiple-room display with extensive functionality, such as individual screen messaging (may or may not include live TV tuner capability)
Over the next few weeks we’ll cover each one of these price points. Today, we’ll focus on the ultra-affordable solution.
Ultra-affordable ($3500 to $5000) — $
This category represents the “down-and-dirty” solution—one screen, one media player, and one USB or flash drive. This type of solution is not networked; instead, staff members in a particular school building or classroom transfer new content to screens by inserting USB or flash drives into media players on-site.
“This type of solution is ideal for a lobby, behind the desk in the main office, or outside a gym or auditorium. It’s a relatively low-cost method of creating and displaying messaging,” says Brian Kutchma, Black Box VP of Sales. “It’s a great way for smaller schools with a limited budget to capitalize on some of the benefits of digital signage. With a plug-and-play AC power outlet media player, an LCD or a plasma screen, and a little effort to learn some out-of-the-box software, you can easily implement digital signage.”
There are no instant-messaging capabilities, and the screen must be turned on and off manually. This system provides a single-zone (PowerPoint-like) presentation with looped content. More advanced systems can display one message or incorporate multiple messages on the same screen; typically, that’s not the case with these entry-level players.
But the single-zone look may actually work to your advantage if the signage is in an area with a lot of foot traffic—an area where people are unlikely to stop and take the time to peruse a screen streaming a mix of content fields. If there’s one message you want to get out at any given time—“Wear your school colors today!” “Track meet is cancelled,” “School pictures tomorrow”—then the single-image screen approach may be best.
Come up with a content strategy early in the process. The most challenging part of any signage system is the content. It’s critical that anyone considering signage has a plan in mind and the resources in place to create and manage the content.
Districts using ultra-affordable solutions like this one usually have a one-screen deployment, so changing content and turning the screen on and off manually isn’t an issue. Also, users usually like the plug-and-play ease of this kind of system.
Best areas for use: School offices, lobbies, cafeteria food-service lines, libraries, employee break areas.
Content-delivery method: Removable storage devices: USB drives, compact flash, SD memory cards.
Pros: Low-cost, easy-to-manage solution for one-screen deployments and single locations; plug-and-play operation.
Cons: Low flexibility. Content must be manually changed through removable storage devices. Content is displayed in a single-zone, looped play with no instant-messaging capability. Screens must be manually turned on and off. Lack of scalability.
NOTE: Estimated prices for solutions include a 42-inch LCD screen, media player, and digital signage software. Prices can vary depending on a number of factors.
You’ve heard, of course, of Moore’s law, which holds that the number of transistors on a chip (and, roughly, computing power) doubles about every 18 to 24 months. Whatever the exact cadence, the upshot is that computing evolves at a breathtaking rate, which basically means any device you have is obsolete in about three years.
That means to stay up to date and maintain your efficiency, your network devices should be replaced approximately every three years. This includes switches, routers, servers—virtually everything with a “brain.” Infrastructure such as cabling and racks has a somewhat longer lifespan—especially if you used fiber with an eye to the future, but even that must be replaced eventually to keep up with changing network demands.
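The three-year replacement cycle is easy to sanity-check with the 18-month doubling figure quoted above (a rough rule of thumb, not a precise law):

```python
# Back-of-the-envelope: relative capability after N months if it
# doubles every 18 months (the commonly quoted Moore's-law cadence).

def relative_capability(months, doubling_period=18):
    """How many times more capable current gear is than N-month-old gear."""
    return 2 ** (months / doubling_period)

print(relative_capability(36))  # 4.0: two doublings in three years
```

Two doublings in 36 months means a three-year-old device delivers roughly a quarter of what the same budget buys today, which is why the gap becomes painful quickly.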
But when budgets are tight, it’s tempting to delay upgrades. This can result in some short-term savings; however, down the road there are costs to not replacing out-of-date hardware. Those costs eventually often far outweigh short-term cash flow benefits and are likely to be higher than just upgrading your hardware in the first place.
Here are some reasons why you should retire those aging network devices:
Network hardware starts to deteriorate after about three years, so older equipment is far more prone to breakdowns. Fixing failed equipment can cost far more in manpower, not to mention downtime, than replacing it would have. Planned upgrades are likely to be less expensive than unplanned equipment failure.
A properly configured network should be stable and require little or no unscheduled maintenance. When a network starts needing unscheduled service calls, it’s probably time to look at the age of your equipment.
Newer network equipment is more energy efficient than ever before. The IEEE 802.3az standard for Energy Efficient Ethernet cuts power consumption by 50% or more by scaling power down during periods of low data activity. Energy-efficient hardware also generates less heat, lowering cooling costs. Newer printers not only use less energy but have a lower per-page cost than older models.
Power is a large part of any IT department’s budget, so using energy-efficient equipment can yield significant savings that often offset a large part of the cost of upgrading to newer equipment.
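To make that concrete, here’s a toy savings estimate; the switch wattage, the 50% reduction, and the electricity rate are all assumed figures for illustration, not vendor specs:

```python
# Rough annual savings from Energy Efficient Ethernet (IEEE 802.3az).
# All figures below are illustrative assumptions.

def annual_kwh(watts, hours=8760):
    """Energy used in a year (8760 hours) at a constant average draw."""
    return watts * hours / 1000

old_switch_w = 150          # assumed average draw of an older access switch
eee_switch_w = 150 * 0.5    # assume 802.3az halves the average draw
rate = 0.12                 # assumed electricity cost in $/kWh

saving_per_switch = (annual_kwh(old_switch_w) - annual_kwh(eee_switch_w)) * rate
print(round(saving_per_switch, 2))  # dollars saved per switch per year
```

Multiply that per-switch figure across a closet or a data center and the savings start to offset a meaningful share of the upgrade cost.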
That same exponential cadence means processor performance roughly doubles every couple of years, so older equipment is significantly slower than today’s equipment and may bog down when running newer software. Sluggish servers and workstations mean hours of lost productivity and frustration waiting for computers. Old machines running old software create inefficiency and are increasingly incompatible with the systems of other companies and with new technology such as smartphones.
This goes back to the maintenance issue. If you push your hardware until it stops working, downtime is inevitable. If your network crashes or goes down for maintenance, you lose work time. Then you not only have to pay for hardware to be replaced anyway, you experience devastating lost productivity and lost sales. And the worst part is that networks tend to crash and burn when they’re being used the hardest—in other words, when you’re busiest.
So although at first glance, it may seem to make sense to hang onto your systems a while longer, keeping equipment for much more than three years starts to pile up hidden costs that go up sharply with time and technological advancement.
Some network mistakes turn up over and over again—these mistakes cost organizations money, time, and even loyal customers. What these mistakes all have in common is that they mainly reflect a lack of planning. A network that runs smoothly and delivers top performance with minimal downtime takes thought, organization, an awareness of current technology, and a plan.
Here we present three common pitfalls. If you pay attention, you don't have to fall into them, too.
1. Non-standard construction
Because data centers are larger and more complex than ever, “seat-of-the-pants” construction doesn’t really work well anymore for any network much larger than a home network. “Guesstimating” can eventually lead to all kinds of problems ranging from overheating to inadequate power to lost data.
To standardize best-practice network construction, in 2005 the Telecommunications Industry Association (TIA) published the TIA-942 standard that set requirements for network architecture, system redundancy, security, file backup, hosting, and power management, as well as a number of other procedures. TIA-942 covers not just the network itself but also supplemental services. Over half the standard covers matters such as electrical systems, HVAC, fire detection and suppression, and building construction. The standard defines four tiers of data centers, with Tier 1 being a simple server room and Tier 4 being a mission-critical data center with high security and redundancy.
TIA-942 helps to ensure consistency and produces networks with high reliability, expandability, and scalability. Because TIA-942 is intended to optimize network performance, a sure-fire way to sub-optimal network performance is to ignore the standard and creatively cut corners. Unfortunately, many installers do cut corners—either to cut costs or sometimes because they don’t know any better.
When having a data center built, insist that the contractor build to TIA-942 standards. Have your data center independently audited and certified. This precaution could save you from future demons such as power disturbances, overheating, and downtime.
2. Neglecting physical security
If anyone can wander into your server room, if you have network ports in public spaces, or if your building access control is substandard, you have a huge hole in your network security. Unrestricted physical access to a network is a much larger security threat than is generally appreciated because, if a person has physical access to a device, there is almost always a way to take control of it or to get data out of it. The fastest way into a network is not through the firewall, but through a USB port on an unattended workstation. Your most dangerous information thief may not be a faraway hacker, but one of the cleaning staff inside your building.
This is why it’s important to secure your hardware—a lost laptop, an open USB port, or a simple network tap can be a conduit for quick and devastating data loss that no firewall can prevent. Today’s digital printer/copiers store copies of pages, so it’s essential to scrub their memories before they leave your premises when they reach the end of their service life. Think also about the paper generated and make sure that sensitive printouts are destroyed before they’re discarded.
There are many ways to ensure the physical security of your network, from simple port locks to sophisticated remote monitoring systems. At minimum, doors and cabinets should be kept locked and laptop computers secured. Biometric locks add an extra layer of security. Video surveillance has become so practical and inexpensive that there’s no reason not to use it in secure areas.
3. Insufficient technical support
In the middle of a busy business day, the network goes down and your support staff is either nowhere to be found or unable to deal with the problem.
Even the most perfect, well-planned network will eventually have difficulties leading to downtime. It’s the nature of the beast that downtime always occurs at the most critical and inconvenient times, especially over holiday weekends. That’s why, if your network is vital to your operation, you need to have an experienced tech available 24/7.
Small businesses in particular often don’t have enough support staff or have insufficiently trained support staff—too many small companies still rely on a staff member who’s “good with computers” but has no real training. This worked to some extent in the days when computers were a lot simpler, but today’s dense, high-speed network environment requires a lot more expertise.
The tech support problem can usually be worked around by contracting with a service that provides network support and is always on call. There are other advantages to working with a network services company. For instance, they can often also manage network installation and act as partners who are knowledgeable about current network technology and can make the network more efficient and cost effective.
Although a network is virtually indispensable in today’s business environment, there are many ways to inadvertently sabotage it, creating unnecessary expense, frustration, and downtime. The best way to avoid the many network pitfalls is through careful planning, meticulous organization, and a willingness to ask for professional help when it’s called for. This is by no means a comprehensive list. What would you consider to be the top networking mistakes? The failure to standardize? Not educating network users? Post your comment in the comment section below.
2012 was jam-packed with network breaches and 2013 will be no different. It’s important to learn and understand new attack methodologies and take a proactive approach to defuse these threats. In this blog post we’ll share a few simple formulas to reduce risk, comply with regulations, and harden your systems against cybercrime.
The first formula is based on U.S. military basic war tactics and is called the four Ds. They are:
1. Detect – awareness of a threat
2. Deter – preempting exploitation
3. Defend – fighting in real-time
4. Defeat – winning the battle!
The second formula is well known in network security circles and is called the “Risk Formula”:
R = T x V x A
(R)isk = (T)hreats x (V)ulnerabilities x (A)ssets
So, to fully understand your risks, you need to deal with:
Threats = Cybercriminals, malware, malicious insiders
Vulnerabilities = Weaknesses that threats exploit
Assets = People, property, your network, devices, etc.
Now, let’s put these two formulas together—the 4Ds and the Risk Formula—to build a more proactive, next-generation defense:
4Ds x R = [4Ds x T] x [4Ds x V] x [4Ds x A]
Using the 4Ds with the Risk Formula:
You’ll never be 100% secure, but you can dramatically reduce your risk and proactively defend your organization by containing and controlling threats, vulnerabilities, and assets.
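As a toy illustration of how the formula behaves, here’s R = T x V x A in a few lines; the 0-10 scale and the scores are invented for the example:

```python
# Toy illustration of the risk formula R = T x V x A.
# Scores are on an arbitrary 0-10 scale; the values are invented.

def risk(threat, vulnerability, asset_value):
    """Higher score = higher risk. Any factor at zero zeroes the risk."""
    return threat * vulnerability * asset_value

# An internet-facing server with an unpatched flaw and valuable data:
exposed = risk(threat=8, vulnerability=7, asset_value=9)   # 504

# The same server after patching (vulnerability driven toward zero):
patched = risk(threat=8, vulnerability=1, asset_value=9)   # 72

print(exposed, patched)
```

Because the formula is multiplicative, driving any one factor toward zero collapses the whole score, which is exactly the point of applying the 4Ds to threats, vulnerabilities, and assets individually.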
Every business depends on its network to run efficiently at all times. No one can afford network outages or degradations due to poorly planned infrastructure changes. The following three steps help mitigate risks when managing network change, while also ensuring faster and more cost-effective implementations. If any one step is skipped or done incorrectly, costlier problems can potentially develop later.
Discovery and baselining
Network professionals must first know what they’re dealing with. Discovery means asking: What kind of equipment exists? What is the traffic today? Who are the users? It should include hardware inventory, applications, router configurations, switch configurations, network cabling and protocol usage. Engineers should evaluate current network performance, including traffic patterns, bandwidth optimization, Internet connectivity, and network vulnerabilities.
Baselining means creating documentation of the current state so there is something to work from to plan changes and measure against to validate them.
The next step is designing the plan for making the changes using the documentation as a guide. What is the end goal and how will you get there? This is the stage at which the IT team makes decisions about reconnecting, the addressing scheme, server location changes, etc., then creates a design to facilitate those decisions.
The third step is validating the design after implementation. Are all the devices configured correctly? Did a user get moved? Did the switch get changed? Network professionals verify that changes were made, then document, report and baseline the network again for future reference.
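The baseline-then-validate loop can be sketched as a simple snapshot diff: record device settings before the change, record them again after, and compare. The device names and attributes here are hypothetical:

```python
# Minimal sketch of baselining and post-change validation: diff a
# device inventory snapshot taken before the change against one
# taken after. Device names and attributes are invented examples.

def diff_baseline(before, after):
    """Return devices that were added, removed, or changed."""
    added   = {d: after[d] for d in after.keys() - before.keys()}
    removed = {d: before[d] for d in before.keys() - after.keys()}
    changed = {d: (before[d], after[d])
               for d in before.keys() & after.keys()
               if before[d] != after[d]}
    return added, removed, changed

before = {"sw-core-1": {"vlan": 10, "uplink": "10G"},
          "sw-edge-7": {"vlan": 20, "uplink": "1G"}}
after  = {"sw-core-1": {"vlan": 10, "uplink": "10G"},
          "sw-edge-7": {"vlan": 30, "uplink": "1G"},   # VLAN moved
          "sw-edge-8": {"vlan": 30, "uplink": "1G"}}   # new switch

added, removed, changed = diff_baseline(before, after)
print(sorted(added), sorted(removed), sorted(changed))
```

The "after" snapshot then becomes the new baseline for the next round of changes.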
There is a way to speed up the process without sacrificing precision. A network analyzer makes following the process outlined above easier, particularly if the device includes all of the following capabilities:
For more tips on how an analyzer can assist with managing change quickly, efficiently, and accurately, download the complete white paper, Proven Techniques and Best Practices for Managing Infrastructure Changes.
SFP, SFP+, and XFP are all terms for a type of transceiver that plugs into a special port on a switch or other network device to convert to a copper or fiber interface. These compact transceivers replace the older, bulkier GBIC interface.
Although these devices are available in copper, their most common use is to add fiber ports. Fiber options include multimode and single-mode fiber in a variety of wavelengths covering distances of up to 120 kilometers (about 75 miles), as well as WDM fiber, which uses two separate wavelengths to both send and receive data on a single fiber strand.
SFPs support speeds up to 4.25 Gbps and are generally used for Fast Ethernet or Gigabit Ethernet applications. The expanded SFP standard, SFP+, supports speeds of 10 Gbps or higher over fiber. XFP is a separate standard that also supports 10-Gbps speeds. The primary difference between SFP+ and the slightly older XFP standard is that the SFP+ moves the chip for clock and data recovery into a line card on the host device. This makes an SFP+ smaller than an XFP, enabling greater port density.
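The distinctions above boil down to a small lookup. In this sketch, the maximum speeds follow the article; the GBIC figure and the one-line notes are simplifying assumptions:

```python
# Quick lookup of the transceiver families discussed above.
# Speeds are nominal maximums, not an exhaustive spec.

TRANSCEIVERS = {
    "GBIC": {"max_gbps": 1.25, "note": "older, bulkier form factor (assumed rate)"},
    "SFP":  {"max_gbps": 4.25, "note": "Fast/Gigabit Ethernet applications"},
    "SFP+": {"max_gbps": 10.0, "note": "CDR on the host line card; densest"},
    "XFP":  {"max_gbps": 10.0, "note": "CDR inside the module; larger"},
}

def candidates(required_gbps):
    """Transceiver types that can carry at least the required rate."""
    return sorted(t for t, spec in TRANSCEIVERS.items()
                  if spec["max_gbps"] >= required_gbps)

print(candidates(10))  # ['SFP+', 'XFP']
print(candidates(1))   # all four families
```

For a 10-Gbps link only SFP+ and XFP qualify, and between those two, SFP+ usually wins on port density.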
Because all these compact transceivers are hot-swappable, there’s no need to shut down a switch to swap out a module – it’s easy to change interfaces on the fly for upgrades and maintenance.
Another characteristic shared by this group of transceivers is that they’re OSI Layer 1 devices – they’re transparent to data and do not examine or alter it in any way. Although they’re primarily used with Ethernet, they’re also compatible with other standards such as Fibre Channel, as well as legacy technologies like ATM, SONET, and Token Ring.
Formats for SFP, SFP+, and XFP transceivers have been standardized by multisource agreements (MSAs) between manufacturers, so physical dimensions, connectors, and signaling are consistent and interchangeable. Be aware though that some major manufacturers, notably Cisco, sell network devices with slots that lock out transceivers from other vendors.
It has become almost automatic to protect your data center by backing up your servers, installing firewalls and virus protection, and keeping that protection up-to-date.
But what about more tangible threats? Do you have hot spots in your racks? If the cooling system shuts down, how will you know when temperatures climb out of control? Are you alerted to humidity changes or water leaks that threaten your equipment?
Planning for the unexpected is a critical task because more systems than ever are performing mission-critical functions, and these systems are often deployed without the proper environmental infrastructure to support them. Equipment density is increasing constantly, putting more stress on ventilation and power.
What’s an environmental monitoring system?
Environmental monitoring products enable you to actively monitor the conditions in your rack, server room, data center, or anywhere else you need to protect critical assets. Conditions monitored include extreme temperatures, humidity, power spikes and surges, water leaks, smoke, and chemical materials. With proper environmental monitoring, you’re alerted to any conditions that could have an adverse effect on your mission-critical equipment. These products can also alert you to potential damage from human error, hacking, or prying fingers.
Environmental monitors consist of three main elements: a base unit, probes or sensors, and network connectivity and integration. The base units may contain one or more built-in sensors, as well as ports for hooking up external probes. Additionally, they include an Ethernet port and software for remote configuration and graphing. This software may also work with existing network management software, such as SNMP systems.
Measurement: The environmental monitoring appliance displays the values measured by the attached probes, e.g. temperature, humidity, airflow, status of dry contact, door, motion detector, and other sensors.
Data collection and graphing: The measurements are periodically stored in internal memory or on external storage media and displayed as graphs.
Alerting: When a measured value exceeds its predefined threshold, it triggers an alert: a blinking LED on the front panel, an audible alarm, an SNMP trap, e-mail, text message, etc. The environmental monitoring appliance can also activate an external alarm system like a siren or strobe light.
Benefits of environmental monitoring:
Reduced downtime – When things go wrong, you’re the first to know. Minimize downtime by being alerted about conditions that cause damage to servers and other network devices.
Increased profits – They help you cut replacement equipment costs and redistribute your workforce more effectively.
Increased employee satisfaction – With built-in notification features like e-mail, SMS, and SNMP traps, a remote monitoring system enables employees to better manage their work.
That’s where Black Box’s environmental monitoring systems enter the picture. The AlertWerks System consists of SNMP-enabled, Web-based monitors that alert the user to any abnormal conditions. AlertWerks monitors multiple environmental conditions, including temperature, humidity, airflow, smoke, security, and voltage. For more info, visit blackbox.com/go/AlertWerks.
Power over Ethernet (PoE) is invaluable for powering devices such as surveillance cameras, VoIP phones, and wireless access points over the same UTP cable used for data. In the last decade, this technology has matured and gone from a hodgepodge of home-brew and proprietary methods to the safe, reliable IEEE 802.3af PoE standard and the 802.3at PoE+ standard. But many misconceptions from the early days of PoE still linger. Here are the top five misconceptions about today’s PoE:
Power over Ethernet is the same as Ethernet over powerline.
These concepts are often confused because they are the inverse of each other—power over Ethernet uses existing data lines to send power; Ethernet over powerline uses existing electrical wiring to send Ethernet.
Power over Ethernet requires special wiring.
Because PoE operates on CAT5, CAT5e, or CAT6 cable with RJ-45 connectors, there’s no need to modify or upgrade your existing cable infrastructure to use PoE.
PoE requires electrical expertise.
Although early home-brew PoE required electrical expertise and a lot of calculating, today’s 802.3af/at standards-based PoE requires no special electrical expertise. IEEE 802.3af and 802.3at PoE can be installed without worrying about whether a device is getting the wrong amount of power or—worse—getting power when it shouldn’t be. This is because PoE power source equipment (PSE) communicates with powered devices (PDs) to determine power requirements.
An 802.3af or 802.3at PSE doesn’t add power to the data line until the PD indicates that it’s compatible. The PD may have an optional power class that indicates its power requirements to the PSE, enabling the PSE to budget its power load. A PSE also advertises its maximum power to the PD, which is not allowed to draw more than its allocated power.
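That detection-and-classification handshake can be sketched as a simple budgeting function. The per-class wattages below are the PSE-side allocations defined by 802.3af/at (classes 0–3 come from 802.3af; class 4 is 802.3at only); everything else is a simplified illustration, not a real PSE implementation:

```python
# PSE-side power allocations per IEEE 802.3af/at power class (watts at the PSE).
CLASS_POWER_W = {0: 15.4, 1: 4.0, 2: 7.0, 3: 15.4, 4: 30.0}

def allocate(pd_class, budget_remaining_w, pse_supports_at=True):
    """Return watts to allocate for a PD, or None if the port must stay unpowered.

    Mirrors the handshake above: the PSE powers the port only after the PD
    advertises a valid class and the PSE still has budget for that class.
    """
    if pd_class == 4 and not pse_supports_at:
        return None                      # an 802.3af-only PSE can't serve class 4
    needed = CLASS_POWER_W.get(pd_class)
    if needed is None or needed > budget_remaining_w:
        return None                      # unknown class or budget exhausted
    return needed

print(allocate(2, budget_remaining_w=50.0))                         # 7.0
print(allocate(4, budget_remaining_w=50.0, pse_supports_at=False))  # None
```

A non-PoE device never completes detection, so it simply gets no power, which is why standards-based PoE is safe to mix with ordinary Ethernet gear.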
All PoE is the same.
The most common PoE today is standards-based IEEE 802.3af and 802.3at PoE—802.3af provides power up to 15 watts per port and 802.3at provides up to 25 watts per port to support higher-powered devices. 802.3at is backward compatible with 802.3af.
But in addition to standards-based PoE, there are other methods for delivering power over data lines, including legacy proprietary PoE, high-wattage PoE, and passive PoE. Different kinds of PoE are not interchangeable, and you may damage a device by connecting it to the wrong kind of PSE.
You need to buy all new network components to add PoE to an existing network.
There’s no need to buy new PoE switches—just add midspan power injectors such as Black Box PoE Gigabit Injectors to existing switches. The PoE Gigabit Injector family is available in 802.3af and 802.3at versions in sizes ranging from 16-port rackmount models all the way down to our 802.3at PoE Gigabit Injector (LPJ001A-T), which enables you to add 802.3at PoE to just one port.
You can even power non-PoE devices by using a PoE splitter such as the Black Box Gigabit PoE+ Splitter (LPS2001), which separates the PoE power on the data line to provide DC power to a device’s power jack.
Putting aside all the “buzz and hype” about digital signage, when it comes right down to it, you may still be wondering if it’s right for you, your business, or your organization. As a basic guide, if you answer yes to any of the following questions, chances are, it’s something to consider more closely:
You may have already entertained these questions and decided to move forward with a plan to implement digital signage. You may even be shopping for the system at this time. If you are like the majority of people considering digital signage, you have discovered that the marketplace is filled with vendors coming out of the woodwork touting their technologies and software as the latest and greatest solution. Evaluating the competing systems can be a daunting task.
As if this weren’t complex enough, you also have to determine whether a given supplier will meet your specific needs not only in products but also in selection, availability, price, and service after the sale. Black Box can help you no matter where you are in the deployment cycle or what your level of technical or creative expertise is. We can also work within your determined budget. For more information, engage with one of our digital signage success managers at 724-873-6553 or download our White Paper on The Roadmap to Digital Signage Success.