Author Archive: Interop Staff
A guest post by Mike Tighe, Executive Director of Data Services at Comcast Business.
There’s no doubt that cloud computing is here to stay. The benefits – efficiency, lower CapEx, improved productivity, etc. – are well-documented, but to effectively reap the benefits of the cloud, you must also consider the increased role network infrastructure plays in enabling your company to operate efficiently using cloud-based systems. In fact, cloud-based services are only as good as the connection to the cloud itself.
Sure, you have Internet access…
Traditional Internet access can offer a simple and cost-effective way to reach cloud-based resources. But if your business depends on the cloud for mission-critical functions such as syncing data between two offsite data centers, or backing up your customers' credit card information, the inconsistent performance and security risks inherent in using the Internet can introduce uncertainty into your cloud model. Not to mention that if something goes wrong, your Internet service provider is not on the hook to deliver remuneration.
…but for mission-critical business applications, Ethernet is the best choice
A high-performance dedicated Ethernet connection is an attractive alternative to Internet-based connectivity for organizations that need reliable, scalable, and secure connectivity to the cloud. Ethernet service is separate from, yet can integrate with, an Internet connection, and can link offices, public or private data centers, production sites, and other points of operation within the same metro area to each other and to the cloud. This ensures users experience the same performance, security, and service wherever their application is housed.
Ethernet is ideal for enterprise cloud access for three key reasons:
- Reliability. Mission-critical apps cannot tolerate loss of connectivity, for which the Internet is notorious. With Ethernet, not only do you benefit from service on a dedicated facility, but that facility is typically fiber, which can switch to a redundant path in the event of a cut or node failure – in less than 50 milliseconds.
- Security. One of the top jobs of any IT manager is protecting the network from intrusions, a risk inherent in the public Internet. When using the Internet to access your cloud assets, your packets can be passed from company to company – and your service provider cannot take responsibility for what happens on another service provider's route. You can bypass the public Internet and transfer data safely and securely over a private Ethernet network where your traffic is managed by one company from end to end.
- Performance. The Internet may give you the throughput required to access certain cloud-based applications, but it may not. For essential cloud-based applications that require high availability and low latency, more and more companies are turning to Ethernet services. Public Internet access to the cloud cannot guarantee the level of performance, consistency, and availability that many enterprises require. With Ethernet, those offsite applications perform and feel like an extension of your Local Area Network (LAN).
Want to learn more about the optimized cloud-enabled network? Craig Waldrop of Equinix will be giving a short presentation on “High Performance Enterprise IT” at Interop in the Comcast Business booth (#1859) Wednesday at 11 a.m. and 3 p.m. To stay on top of Ethernet news, trends, and case studies, follow Comcast Business on social media.
A post by Josh Conroy, Support Engineer at Thycotic Software.
When setting up security policies to protect your privileged accounts, administrators have to walk a fine line between providing security and keeping things convenient for the user.
Security measures that are inconvenient for the user and measures that fail to provide adequate security can both pose a liability, and both are reasons security precautions get dismissed by management. Here are four steps you can take to help properly secure your privileged accounts.
Changing Passwords Regularly and Using Strong Passwords. Passwords on privileged accounts should be updated system-wide on a regular basis. Rotating passwords regularly reduces the odds of passwords being cracked and helps mitigate the damage should an account be compromised.
Passwords for privileged accounts should be complex, difficult to guess, and not repeated across accounts. The biggest hurdle with complex passwords is that they are difficult to remember, and that difficulty encourages bad security practices such as writing passwords down on paper, reusing passwords, and choosing weak passwords.
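As a minimal sketch of this step, the snippet below generates a strong, unique password for each privileged account using Python's standard-library secrets module; the account names and the 20-character length are illustrative assumptions, not a recommendation from any particular product.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password containing letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        # Keep only candidates that mix character classes.
        if (any(c.islower() for c in candidate)
                and any(c.isupper() for c in candidate)
                and any(c.isdigit() for c in candidate)
                and any(c in string.punctuation for c in candidate)):
            return candidate

# Hypothetical privileged accounts; each gets its own password on every rotation.
accounts = ["svc-backup", "svc-deploy", "db-admin"]
new_passwords = {name: generate_password() for name in accounts}
```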
Central Access to Privileged Accounts. Keeping one centralized, protected source of credential data is more secure than keeping logins written on paper or saved in Excel files scattered across your network. A centralized location also makes it easier to track your accounts and to limit access to this information while still giving administrators easy access.
Auditing Access. It is important to know who has access to privileged accounts and how often these accounts are being used. This helps clarify which accounts need special attention so their security settings can be adjusted. For example, some accounts may require stricter access control, stronger passwords, or a more aggressive password change schedule.
Restricting Access. Because privileged accounts have access to sensitive data and are used to run business-critical applications, you will want to limit who can access them. Only users who work directly with an account should have access to its password. This creates accountability when using accounts that are not directly tied to an individual user. Additionally, having a mechanism in place to restrict privileged account access greatly improves the level of security a company has over its accounts. For example, employee account usage can be monitored, which is often important when employees leave an organization.
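The sketch below illustrates the auditing and restricting ideas together: every access attempt against a privileged account is recorded with who, when, and why, and only an explicitly authorized user is allowed through. It is a toy model with assumed names, not the API of Secret Server or any other credential storage product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PrivilegedAccessEvent:
    """One audit record: who used which privileged account, when, and why."""
    account: str       # privileged account that was requested
    user: str          # employee who requested it
    reason: str        # free-text justification for the access
    allowed: bool      # whether the request was permitted
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

audit_log: list[PrivilegedAccessEvent] = []

def check_out(account: str, user: str, reason: str, authorized_users: set[str]) -> bool:
    """Record every access attempt and permit only explicitly authorized users."""
    allowed = user in authorized_users
    audit_log.append(PrivilegedAccessEvent(account, user, reason, allowed))
    return allowed

# Hypothetical example: only the two DBAs may use the shared "db-admin" account.
check_out("db-admin", "jsmith", "quarterly index maintenance", {"jsmith", "akhan"})
```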
Following these steps will help protect your privileged accounts, but implementation is always an important consideration. Using a credential storage system built specifically for corporate use, such as Secret Server, can help you accomplish these security goals and make the transition from your current policy to a more secure one as painless as possible.
By Sudeepta Ray, Assistant Vice President – Technology, Aricent Group
Roy Chua and Matt Palmer, who run the Wiretap Ventures consultancy, forecast that SDN equipment sales will reach $35.6 billion in 2018. This figure counts hardware that doesn't yet run SDN, but the rationale behind its inclusion is that these equipment sales are being triggered specifically by SDN. In other words, the equipment is being purchased because of its place on an SDN roadmap. Additionally, venture investment in SDN-related companies has grown nearly 50-fold since 2007, reaching $454 million in 2012. There's no question that the industry is gearing up to reap the gains expected from software-defined networking. SDN is positioned to revolutionize the way networks are delivered and to drive innovation, and many in the industry – large and small – are taking part to pave the way for SDN to reach its highest potential.
As a relatively nascent technology, SDN needs a common foundation to succeed. Without it, network operators are limited in how much they can simplify management across their multi-vendor networks, and third-party software developers have to work with each networking vendor separately, which raises development costs and, ultimately, hinders innovation. Seeing the potential CAPEX and OPEX gains that SDN can secure, some of the biggest companies in networking and IT – including Alcatel-Lucent, Cisco, HP, Juniper, and IBM – have collaborated to form the OpenDaylight Project, an open source initiative that aims to accelerate SDN development and make it easier for SDN applications to be built and deployed. This foundation marks the industry's first major step toward unifying and collaborating around the emerging SDN market. The industry recognizes the potential of virtualized local and wide area networks that are easier to manage and cheaper to run than traditional infrastructures, and it recognizes that a standardized approach to SDN applications will be extremely beneficial.
Meanwhile, other network infrastructure vendors are demonstrating the SDN-readiness of their solutions by adding OpenFlow support to their equipment, including Ethernet switches, routers, and wireless access points. In conventional networks, each switch runs proprietary software that decides where to send packets. With the OpenFlow protocol, packet-forwarding decisions are centralized so that the network can be programmed independently of the individual switches. This separation of the data plane from the control plane allows for more effective use of network resources than was possible with traditional networks.
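To make the control-plane/data-plane split concrete, here is a deliberately simplified sketch of a central controller that installs match/action rules into each switch's flow table. The class names, rule fields, and switch IDs are illustrative assumptions; a real deployment would use an actual OpenFlow controller and FLOW_MOD messages rather than this toy model.

```python
from dataclasses import dataclass

@dataclass
class FlowRule:
    """A simplified OpenFlow-style rule: a match field plus a forwarding action."""
    match_dst_prefix: str   # destination IP prefix to match
    out_port: int           # port the switch should forward matching packets to
    priority: int = 100

class Controller:
    """Toy central control plane that programs the flow table of every switch."""
    def __init__(self) -> None:
        self.flow_tables: dict[str, list[FlowRule]] = {}

    def register_switch(self, switch_id: str) -> None:
        self.flow_tables[switch_id] = []

    def install_rule(self, switch_id: str, rule: FlowRule) -> None:
        # A real OpenFlow controller would send a FLOW_MOD message to the switch;
        # here we simply record the rule and keep the table sorted by priority.
        table = self.flow_tables[switch_id]
        table.append(rule)
        table.sort(key=lambda r: r.priority, reverse=True)

# The controller, not each switch's proprietary software, decides forwarding.
ctl = Controller()
ctl.register_switch("edge-1")
ctl.install_rule("edge-1", FlowRule(match_dst_prefix="10.0.2.0/24", out_port=3))
```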
Today, the orchestration of network elements depends on proprietary interfaces, restricting the ability to mix and match, but an SDN infrastructure based on standardized interfaces (e.g., OpenFlow) would allow this functionality across multi-vendor environments. Additional advantages of SDN include improved uptime, by eliminating manual intervention and reducing the risk of human errors that can shut down an entire network; better management, by giving service providers a single vantage point and set of tools to manage the virtual networking environment and its resources; and increased network transparency, which leads to more effective and efficient IT planning and strategies.
With more demands being placed on IT to cut costs and increase efficiency with fewer resources, the founders of the OpenDaylight Project, along with individual network infrastructure vendors, are paving the way for SDN to empower the industry to "do more with less". The need for highly virtualized, programmable network environments puts SDN on a trajectory to drive new waves of network innovation. By virtualizing the network, various segments can be used for different purposes while streamlining network operations, ultimately helping to overcome the limitations and operational challenges posed by today's legacy networking equipment.
A guest post by Chris Taylor, Author, Successful Workplace.
It seems big data means something different to everyone. In the great debate/hype about big data, there's no lack of opinion on the topic, and what it means seems to depend mostly on an individual's product, skill set, and business challenges. This ambiguity shares a great deal of the blame for why the term is often polarizing and why there's a fair amount of cynicism in corners of the marketplace. Just for fun, let's take a look at some of the points of contention.
- Big data isn’t anything new – This is a very legitimate argument for why big data doesn’t deserve quite so much hype. You’ll hear it mostly from the companies that have been solving problems and earning a living with vast amounts of data for decades. There are exceptional examples of this, like Nielsen, the company that started off rating the advertising value of media and morphed into a consumer preference and pattern juggernaut.
- Big data is really about small data – Also a legitimate argument against some of the hype. Companies that crunch data sets, small or large, often find that the pattern exists in just one variable, like the way preferences for wine often come down to our tolerance for acidity. Some of what’s called big data isn’t big when the results come in, but it often takes large data sets to prove that a small amount of data matters…a big data paradox.
- Big data is about the right algorithm, not more data – Like the other two points, this is also mostly true. It shows up in the crowdsourced contest Netflix used to improve the company’s Cinematch predictive powers, which became about tiny tweaks to algorithms to raise results by .01%. There was no human X factor that solved the problem. This argument pits the traditional quantitative analysts against the new breed of data scientists. You could say it is also the fight between math and science, and between causation and correlation. This is a fascinating debate and I suspect both sides are right in differing circumstances.
- The 3 V’s (volume, velocity and variety) aren’t enough – Coming up with a new V for the description of big data is now an object of derision. “How many V’s do you have?” comes up often as an easy way to understand someone’s perspective on the topic, but it has also reached the point of silliness. Gartner’s Doug Laney came up with the 3 V’s back in 2001, and the debate has raged ever since around additional V’s such as value.
- Big data is creepy – This one really depends on the definition of creepy. People with Rain Man-like capabilities have always been able to mentally process exceptional amounts of data and that ability could be used to cheat, manipulate and get ahead. Just because we’re able to see more complex patterns in ever more data doesn’t make big data itself creepy. Its use, just like before computerization, is what can be creepy.
At the end of the day, big data is going to continue to be a topic of intense debate because so much of what we do is affected by someone’s ability to gather, analyze and then predict who we are from our patterns, even the non-transactional ones like social media use. Enterprises can’t afford to ignore the technology that their competitors are using to better understand customers and be more efficient in their operations.
Join me in Las Vegas
The big data debate is an excellent example of the value of tapping a wide variety of experts to wade through the hype and find value. I’ll be hosting the Big Data Workshop at Interop in Las Vegas on May 7th and invite you to join us to hear from a broad variety of sources like TIBCO, IBM, Datameer, Fabless Labs, QLogic, HP, Dell, Talend, and the editors of Big Data Republic and Venture Beat. Rarely will you find such an opportunity to hear a full day of so many valuable perspectives.
Receive 25% off the onsite price of a Conference Package or register for a Free Expo Pass with Priority Code “DISPEAKER”.
A guest post by Rainer Enders, CTO, Americas, NCP engineering
The Android mobile platform and its oft-publicized security limitations, along with those of other mobile operating systems (OSs), are guaranteed to be a hot topic at this year’s Interop event. After all, they have even caught the attention of the American Civil Liberties Union (ACLU), which filed a complaint against the four major cellular carriers in the U.S. for not doing enough to protect the private information of subscribers using the Android OS.
A guest post by Dave Link, CEO of ScienceLogic.
I am so pleased to announce that ScienceLogic was selected as a Best of Interop finalist in the management and monitoring category.
What a privilege it is to be considered, and we are super excited to be working once again this year in the Interop.net NOC. Cloud management is a difficult, amorphous term to pin down given the lack of standards and the breadth of what cloud means to different technologies and users. In the end, I believe it has never been more important to think about the management of public, private, and hybrid clouds in terms of service management. Given the multiple tiers of a complex application, the dynamic, rapidly shifting (and often geographically distributed) infrastructure its workloads are deployed on, and the dozens or even hundreds of individual technology components that must work together “just right” to deliver a service, achieving detailed, business-impact service level management views is a vexing problem – at least without a crazy amount of setup and manual tool management and alignment effort.