Galaxy Office Automation

Container Adoption Trends: Why, How and Where

Benchmark your application strategy with data. Read this ASR survey of IT decision makers about adoption of containers and Kubernetes.

Application containerization—packaging software to create a lightweight, portable, consistent executable—delivers technical and business advantages over conventional delivery methods. Containerized apps are quickly deployable for easy scaling, run in diverse environments and offer security advantages thanks to their isolation from other software. In combination with orchestration software such as Kubernetes, containers can also be centrally dispatched, managed and scaled for IT agility.

In September 2021, Dell commissioned Aberdeen Strategy and Research (ASR) to survey hundreds of IT decision makers with experience in choosing or deploying containers. The goal was simple: to better understand how and why containers and Kubernetes are being deployed at mid-size and larger enterprises, assess container-related performance advantages, and uncover challenges associated with Kubernetes and container environments. The survey found that, on average, over 50% of applications are containerized.

Among the use cases for container adoption highlighted in the results are the expected drivers of application development and testing. Other notable drivers include server consolidation, multi-cloud capability and automating the pipelines from application code to production environments. Interestingly, the survey found that the deployment of third-party applications and services is cited as a driver more frequently than the in-house development of custom applications. Even for organizations that do little more than tie together existing applications with lightweight scripts or use off-the-shelf applications, containerization offers logistical benefits.

It should be no surprise that security, time-to-market, improved deployment capabilities and driving efficiencies are cited as key drivers by respondents to this survey. Some common inhibitors to adoption were also cited, including enabling technology that is too complex to justify the effort, uncertainty around security capabilities, lack of internal know-how and fear of spiraling costs.

Application deployment trends found by the survey show that while container adoption is widespread, virtual machines continue to lead as the deployment mechanism for applications. This points to the need for a pragmatic approach to enterprise architectures that assumes the co-existence of VMs and containers for the foreseeable future. Furthermore, organizations cited the strong need for support for both public cloud and private cloud deployment options with a hybrid approach being pursued by over two-thirds of surveyed organizations.

Original research like this is a great way to benchmark how your IT strategy aligns with industry trends. Please read the executive summary of the results and also reference the infographic summarizing how Dell Technologies and VMware solutions provide a pragmatic approach for container adoption.

Author: Bob Ganley, Dell Technologies Cloud Product Group

Source: https://www.dell.com/en-us/blog/container-adoption-trends-why-how-and-where/

FOR A FREE CONSULTATION, PLEASE CONTACT US.

Galaxy Recognized as Dream Company to Work For by HRD Congress

Galaxy Recognised by the World HRD Congress as One of the Dream Companies to Work For Under the IT/ITES Category

We are a leading technology solutions provider that helps organisations digitally transform their business. With a PAN-India presence, supported by 200+ certified, committed professionals, we design and implement IT infrastructure solutions that are cost-effective, agile and scalable, meeting our customers' present as well as future needs. Recently, we were recognised by the World HRD Congress as one of the Dream Companies to Work For under the IT/ITES category and received this prestigious award at the 30th Edition of the World HRD Congress & Awards in Mumbai on 23rd March 2022.

The World HRD Congress recognises organisations that have demonstrated excellence and innovation in the field of IT/ITES. Its goal is to provide a platform showcasing dream companies to work for across various industries. Nominations are evaluated against pre-defined criteria by an eminent jury of senior professionals and go through a rigorous six-step process, from receipt of entries to the final rankings, which includes a presentation by the short-listed companies on innovative HR practices, company values, work culture, CSR and more. For more details about the rankings and awards, see http://dreamcompaniestoworkfor.org .

We received the award for ensuring employee happiness and satisfaction along with job security and clear roadmaps and avenues for growth. We have always strived to provide an environment for innovation in which everyone has the responsibility and ownership to continuously improve what they are doing.

While expressing pride and happiness over the recognition, Mr. Anoop Pai Dhungat, Managing Director, stated: “This is an important milestone for us, and we will continue to invest our management time and focus on creating a highly committed workforce and delivering great value to our customers. We strive to keep up the good work by our HR team and continue to improve our workplace culture as we move towards being a great organisation.”

Looking ahead, in line with the Company's growth story, we are targeting an overall headcount growth rate of 20 percent during the year. We also believe in selecting talent from campus and grooming them in various areas of technology and operations; this has been a key focus of our hiring in each of the past three years.

The Five R’s Of Application Modernization

Most organizations realize that application modernization is essential in order to thrive in the digital age, but the process of modernizing can be highly complex and difficult to execute. Factors such as rapidly growing application volume, diversity of app styles and architectures, and siloed infrastructure can all contribute to the challenging nature of modernization. To add to this complexity, there are multiple ways to go about modernizing each individual application. Depending on business and technical goals, you may opt to lift-and-shift some apps while containerizing or even refactoring others. Each path entails different time commitments and levels of app performance, and ultimately a different level of effort to weigh against an organization’s anticipated return on investment.

THE FIVE R’S

The Five R’s are a set of common modernization strategies that organizations can use when moving applications to modern infrastructure and cloud native application platforms. The first step to efficiently modernizing your application portfolio is to determine the best strategy for each app based on business needs and technical considerations (e.g., how much effort will be involved in modernizing the application and the target infrastructure platform for the app).

Refactor
Refactoring refers to making significant source code changes to the application (rewriting the application or service), typically using cloud native technologies such as microservices and application programming interfaces (APIs). While the process can be complex and laborious, this strategy provides the most benefit for high-value systems and applications that require frequent updates and innovation.

Replatform
Replatforming involves containerizing an application and moving it to a Kubernetes-based platform. There may be small code changes needed to take advantage of the new environment. This strategy is commonly implemented when moving applications running on virtual machines (VMs) to container-based apps running on a modern app platform or public cloud infrastructure.

Rehost
Rehosting refers to changing the infrastructure or operation of an application without changing the application itself. This is often done to gain the cost benefits of the cloud when the rate of change to an application is low and wouldn’t benefit from refactoring or replatforming.

Retain
Retaining involves optimizing and keeping an application as-is. This strategy might be used when there is data that can’t be moved, or when modernization can be postponed.

Retire
Retiring is when a traditional application is decommissioned because it is no longer used, or is replaced with an off-the-shelf software-as-a-service (SaaS) offering.
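The decision logic behind the Five R’s can be sketched as a simple rule-based triage. The attribute names, thresholds, and rules below are illustrative assumptions for exposition only, not part of any VMware assessment tooling:

```python
# Illustrative Five R's triage. Attribute names and rules are hypothetical;
# real assessments weigh many more technical, business, and people factors.

def disposition(app: dict) -> str:
    """Suggest one of the Five R's for a simplified application profile."""
    if not app.get("still_in_use", True):
        return "Retire"          # unused, or replaceable by a SaaS offering
    if not app.get("data_movable", True):
        return "Retain"          # e.g., data that cannot leave its location
    value = app.get("business_value", "low")    # "low" | "high"
    change = app.get("change_rate", "low")      # "low" | "high"
    if value == "high" and change == "high":
        return "Refactor"        # rewrite using cloud native architecture
    if change == "high":
        return "Replatform"      # containerize, with small code changes
    return "Rehost"              # lift-and-shift, no application changes

apps = [
    {"name": "orders",     "business_value": "high", "change_rate": "high"},
    {"name": "payroll",    "change_rate": "low"},
    {"name": "legacy-fax", "still_in_use": False},
]
for a in apps:
    print(a["name"], "->", disposition(a))
```

The ordering of the checks mirrors the framework: first rule out apps that should leave the portfolio or stay put, then match the remaining apps to an effort level based on value and rate of change.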

THE RELATIONSHIP BETWEEN TIME AND VALUE IN YOUR APP MODERNIZATION STRATEGY

In most cases, the higher the business value of an application, the greater potential benefit there is to undergo more change. By refactoring primarily business-critical and high-value apps, you can maximize your team’s precious time while prioritizing the applications that have the most to gain from more flexible architectures and scalable infrastructure. Applications that remain unchanged for long periods of time and don’t hinder your company’s ability to innovate don’t need to be rewritten. When the goal is to increase IT efficiencies and decrease IT costs for apps requiring infrequent updates, you’ll be better off rehosting or replatforming these applications.

HOW TO ASSESS AND DISPOSITION YOUR PORTFOLIO

The main factors that play a critical role in a successful and actionable modernization strategy fall into three categories: technical, business, and organization/people. VMware helps organizations jumpstart app portfolio modernization by analyzing and prioritizing these considerations and more through service engagements like VMware App Navigator in our Rapid Portfolio Modernization program. By assessing and dispositioning your application portfolio, you can determine which of the Five R’s will be the best course of action for each of your apps.

For technical factors, consider variables such as application framework and runtime, architecture design, dependencies, and integrations. Tools such as Application Transformer for VMware Tanzu and our Cloud Suitability Analyzer can help streamline this discovery and analysis. For business factors, consider elements like business criticality, licensing costs, and time-to-market factors. For organizational and people factors, consider domain expert availability, organizational and team structure, and calendar dependencies.

Ultimately, there are lots of facets to consider when deciding the best course of action for each application in your portfolio. But, by leveraging this framework with VMware as your partner, you can standardize and simplify your strategy to efficiently assess and disposition your portfolio.

LANDING ZONES

Once you have determined which apps you want to refactor, replatform, and rehost, where do these apps go after they’re modernized? We call the new target infrastructure “landing zones,” which may include some combination of on-premises, public cloud(s), Kubernetes, VMs, platform as a service (PaaS), and bare metal. Because of the dynamic nature of applications and the complexities of enterprise IT budgets, choosing the right landing zones is rarely as simple as just identifying the least expensive option.

To determine the best landing zones for your apps, consider factors like data gravity, developer experience, potential cloud exit strategies, and implications for the mainframe.

HOW TO GET STARTED

We’ve established what the Five R’s are, the relationship between effort to change and expected value in app modernization, app disposition strategies, and how to decide on the right landing zones. But how do you get started on this app modernization path? Here’s a guideline:

Get Buy-in: make sure all the stakeholders for an application are bought into the modernization effort.

Set Expectations: provide as much visibility as possible into the time and effort that a modernization project will require. Avoid over-promising and under-delivering.

Restructure when Needed: prepare for your organizational structure to evolve as modernization efforts advance. Pay attention to how other companies have organized, but don’t just assume the same approach will work for you.

Prioritize Your Portfolio: analyze your applications and divide them under the Five R’s: refactor, replatform, rehost, retain, retire.

Look for Patterns in Your Portfolio: identify commonalities among your applications, looking for architectural and technical design similarities.

Choose the Right Starting Point: pick one or a few small(ish) projects that will help you start on the right foot in terms of building skill, momentum, or both. Or, focus on one or a few groups of similar applications, selecting a representative application in each group to start with.

Make Smart Technology Decisions: don’t choose a set of technologies simply because it’s what the “cool kids” are using. Make sure your choices are right for your organization.

Break Down Monoliths: plan carefully to decompose monolithic applications into more manageable pieces without worrying about satisfying any cloud native purity tests.

Pick Platforms Pragmatically: base cloud and platform choices on the needs and capabilities of your organization.

Interested in following this guideline? VMware’s Rapid Portfolio Modernization program brings automated tooling and proven practices to execute upon each of these steps in a seamless and effective way.

Ultimately, the best app modernization path is one that aligns with your business goals, can produce results quickly, and is agile enough to evolve along with demands. The Five R’s provide you with a framework to best disposition your apps in a way that reduces the overwhelming nature of app modernization.

Want to learn more about how to kickstart your application modernization efforts? Check out our eBook A Practical Approach to Application Modernization.

Author: VICTORIA WRIGHT

Source: https://tanzu.vmware.com/content/blog/the-five-rs-of-application-modernization

FOR A FREE CONSULTATION, PLEASE CONTACT US.

Three Ways To Optimize Your Edge Strategy

Enterprises can use these methods to move from proof-of-concept to a production edge platform that delivers a competitive advantage.

In enterprise IT circles, it’s hard to have a conversation these days without talking about edge computing. And there’s a good reason for this. “The edge” is where businesses conduct their most critical business. It is where retailers transact with their customers. It is where manufacturers produce their products. It is where healthcare organizations care for their patients. The edge is where the digital world interfaces with the physical world – where business critical data is generated, captured, and, increasingly, is being processed and acted upon.

This isn’t just an anecdotal view. It’s a view backed up by industry research. For example, 451 Research forecasts that by 2024, 53% of machine- and device-generated data will initially be stored and processed at edge locations. IDC estimates that, by 2024, edge spending will have grown at a rate seven times greater than the growth in spending on core data center infrastructure. In short, this kind of growth is enormous.

WHY EDGE?

What’s behind the rush to the edge? The simplest answer to that question is that business and IT leaders are looking for every opportunity they can find to achieve a competitive advantage. Eliminating the distance between IT resources and the edge achieves several different things:

  • Reduced latency – Many business processes demand near real-time insight and control. While modern networking techniques have helped to reduce the latency introduced by network hops, crossing the network boundaries between edge endpoints and centralized data center environments does have some latency cost. You also can’t cheat the speed of light, and many applications cannot tolerate the latency introduced by the physical distance between edge endpoints and centralized IT.
  • Bandwidth conservation – Edge locations often have limited WAN bandwidth, or that bandwidth is expensive to acquire. Processing data locally can help manage the cost of an edge location while still extracting the maximum business value from the data.
  • Operational technology (OT) connectivity – Some industries have unique OT connectivity technologies that require specialized compute devices and networking in order to acquire data and pass control information. Manufacturing environments, for example, often leverage technologies such as MODBUS or PROFINET to connect their machinery and control systems to edge compute resources through gateway devices.
  • Business process availability – Business critical processes taking place in an edge location must continue uninterrupted – even in the face of a network outage. Edge computing is the only way to ensure a factory, warehouse, retail location, or hospital can operate continuously and safely even when it is disconnected from the WAN.
  • Data sovereignty – Some industries and localities restrict which data can be moved to a central location for processing. In these situations, edge computing is the only solution for processing and leveraging the data produced in the edge location.
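The speed-of-light point above can be made concrete with a back-of-the-envelope calculation. The figure used below (signals cover roughly 200 km per millisecond in optical fiber, about two-thirds the speed of light in a vacuum) is a rough assumption, and real networks add routing hops, queuing, and indirect paths on top of this physical floor:

```python
# Lower bound on round-trip latency from physical distance alone.
# Assumes ~200 km/ms propagation in optical fiber; real paths are slower.

C_FIBER_KM_PER_MS = 200.0  # approximate signal speed in fiber

def min_round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time over fiber, in milliseconds."""
    return 2 * distance_km / C_FIBER_KM_PER_MS

# On-site edge vs. a regional data center vs. a distant cloud region.
for km in (1, 100, 2000):
    print(f"{km:>5} km -> at least {min_round_trip_ms(km):.2f} ms round trip")
```

Even this idealized floor shows why a control loop that must react in single-digit milliseconds cannot be served from a cloud region thousands of kilometers away.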

As companies implement edge computing, they are moving IT resources into OT environments, which are quite different from the IT environments that have historically housed enterprise data. IT teams must adapt IT resources and processes for these new environments.

Let’s talk about the state of many edge implementations today and how to optimize your path forward.

MOVING BEYOND PROOFS OF CONCEPT (POCS)

The process of implementing and operating edge computing isn’t always straightforward. Among other things, edge initiatives often have unclear objectives, involve new technologies, and uncover conflicting processes between IT and OT. These challenges can lead to projects that fail to move from the proof-of-concept stage to a scalable production deployment.

To help organizations address these IT-OT challenges, the edge team at Dell Technologies has developed best practices focused on moving edge projects from POCs to successful production environments. These best practices are derived from our experience enabling IT transformation within data center environments, but they are adapted to the unique needs of the edge OT environments. To make this easy, we have distilled these best practices down to three straightforward recommendations for implementing edge use cases that can scale and grow with your business.

  1. Design for business outcomes.

Successful edge projects begin with a focus on the ultimate prize — the business outcomes. To that end, it’s important to clearly articulate your targeted business objectives upfront, well before you start talking about technology. If you’re in manufacturing, for example, you might ask whether you want to improve production yields or reduce costs by a certain amount by proactively preventing machine failure and the associated downtime.

Measuring results can be difficult when you are leveraging a shared infrastructure, especially when you are trying to look at the return on investment. If your project is going to require a big upfront investment with an initial limited return, you should document those business considerations and communicate them clearly. Having specific business goals will enable you to manage expectations, measure your results as you go, and make any necessary mid-course corrections.

  2. Consolidate and integrate.

Our second recommendation is to look for opportunities to consolidate your edge, with an eye toward eliminating stove-piped applications. Consolidating your applications onto a single infrastructure can help your organization realize significant savings on your edge computing initiatives. Think of your edge not as a collection of disconnected devices and applications, but as an overall system. Virtualization, containerized applications, and software-defined infrastructure will be key building blocks for a system that can enable consolidation.

Besides being more efficient, edge consolidation also gives you greater flexibility. You can more easily reallocate resources or shift workloads depending on where they are going to run the best and where they are going to achieve your business needs. Consolidating your edge also opens opportunities to share and integrate data across different data streams and applications. When you do this, you are moving toward the point of having a common data plane for your edge applications. This will enable new applications to easily take advantage of the existing edge data without having to build new data integration logic.

As you consolidate, you should ensure that your edge approach leverages open application programming interfaces, standards, and technologies that don’t lock you into a single ecosystem or cloud framework. An open environment gives you the flexibility to implement new use cases and new applications, and to integrate new ecosystems as your business demands change.

  3. Plan for growth and agility.

Throughout your project, all stakeholders must take the long view. Plan for your initial business outcomes, but also look ahead and plan for growth and future agility.

From a growth perspective, think about the new capabilities you might need, and not just the additional capacity you are going to need. Think about new use cases you might want to implement. For example, are you doing some simple process control and monitoring today that you may want to use deep learning for in the future? If so, make sure that your edge infrastructure can be expanded to include the networking capacity, storage, and accelerated compute necessary to do model training at the edge.

You also must look at your edge IT processes. How are your processes going to scale over time? How are you going to become more efficient? And how will you manage your applications? On this front, it makes sense to look at the DevOps processes and tools that you have on the IT side and think about how those are going to translate to your edge applications. Can you leverage your existing DevOps processes and tools for your off-the-shelf and custom edge applications in your OT environment, or will you need to adapt and integrate them with the processes and tools that exist in your OT environment?

A FEW PARTING THOUGHTS

To wrap things up, I’d like to share a few higher-level points to consider as you plan your edge implementations.

Right out of the gate, remember that success at the edge depends heavily on having strong collaboration between your IT stakeholders and your OT stakeholders. Without that working relationship, your innovations will be stuck at the proof-of-concept stage, unable to scale to production, across processes, and across factories.

Second, make sure you leverage your key vendor relationships, and use all the capabilities they can bring to bear. For example, Dell Technologies can help your organization bring different stakeholders within the ecosystem together through the strong partnerships and the solutions that we provide. We can even customize our products for particular applications. Talk to us about our OEM capabilities if you have unique needs for large edge applications.

Finally, think strategically about the transformative power of edge, and how it can give you a clear competitive advantage in your industry. But always remember that you are not the only one thinking about edge. Your competitors are as well. So don’t wait to begin your journey.

Author: Philip Burt, Product Manager-edge strategy, Dell Technologies.

Source: https://www.dell.com/en-us/blog/three-ways-to-optimize-your-edge-strategy/

FOR A FREE CONSULTATION, PLEASE CONTACT US.

5 Enterprise Tech Predictions Following An Unpredictable Year

We all went into 2020 with a plan. Those plans were rendered irrelevant just a few months into the year. Organizations quickly rolled out contingency plans and put non-essential initiatives on hold. This may lead one to believe that 2020 was a wash for technology innovation. I would argue otherwise. In fact, organizations deployed inspired solutions to tackle considerable challenges.

Here are a few observations from 2020 and five enterprise tech predictions for 2021.

The Edge Is the New Frontier for Innovation

Amazing things are happening at the edge. We saw that on full display in 2020. Here are a few examples:

  • When the pandemic first hit, a lab testing company rolled out 400 mobile testing stations across the United States in a matter of weeks.
  • A retailer relocated their entire primary distribution center, which was in a state under stay-at-home orders, to a new location in order to fulfill an influx of e-commerce orders.

These organizations used existing edge investments to react and innovate with velocity. And in the year ahead, we will continue to see prioritized investment at the edge.

Network reliability and performance directly impact employee and customer experience. That alone led to expansive SD-WAN rollouts at the edge and in home offices. Simple SaaS-delivered solutions (inclusive of hardware) will further improve security and user experience wherever employees choose to work. And this will start a trend in which these solutions become the norm.

Additionally, I expect organizations to increasingly adopt secure access service edge (SASE) solutions. Legacy network and security architectures create unnecessary hairpinning and performance degradation. Instead, our future will lie in application and infrastructure services that are defined in software and deployed and managed as software updates. While upending legacy procurement processes along the way, organizations will dramatically improve performance and security.

We are also getting far more intelligent at the edge, with the ability to learn, react and optimize in real-time. Furthermore, we are seeing new opportunities for infrastructure consolidation at the edge, reducing the number of specialized appliances required to meet technology needs. This is an exciting development as it opens doors for cost-positive solutions where you improve automation, safety, and efficiencies, while simultaneously reducing costs.

Decentralization of Machine Learning

Staying at the edge for another moment, let’s talk about federated machine learning (FML). We are starting to see early uptake in this area among businesses. Across all industries, organizations are innovating to make better data-driven decisions, while leveraging highly distributed technology footprints.

With compute capacity practically everywhere, federated learning allows organizations to train ML models using local data sets. Open source projects, such as FATE and Kubeflow, are gaining traction. I expect the emergence of intuitive applications on these platforms to further accelerate adoption.
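To illustrate the core idea behind federated learning without any particular framework’s API, here is a toy sketch of federated averaging: each site takes a local training step on its own data, and only model parameters, never raw data, leave the site to be combined. The model and data are stand-ins for exposition:

```python
# Toy federated averaging: sites train locally; only weights are shared.
# Uses a 1-D linear model y = w * x so the mechanics stay visible.

def local_update(w, data, lr=0.1):
    """One gradient step on local data for the model y = w * x."""
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    return w - lr * grad

def federated_average(site_weights, site_sizes):
    """Combine local models, weighting each site by its data volume."""
    total = sum(site_sizes)
    return sum(w * n for w, n in zip(site_weights, site_sizes)) / total

# Two edge sites each hold a private slice of data generated by y = 3 * x.
sites = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0)]]
w_global = 0.0
for _ in range(50):  # communication rounds
    local_models = [local_update(w_global, d) for d in sites]
    w_global = federated_average(local_models, [len(d) for d in sites])
print(round(w_global, 2))  # the global model converges toward 3.0
```

The same loop structure, with real models and secure aggregation in place of plain averaging, is what frameworks in this space industrialize.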

Early ML solutions disproportionately benefited a small percentage of enterprises. These organizations had mature data science practices already in place. ML adoption continues to pick up pace. And that acceleration is driven by turnkey solutions built for “everyone else.” These are enterprises that want to reap the rewards of ML without having to make large investments in data science teams—often a difficult challenge given the industry shortage of data scientists today.

Renewed Momentum for Workplace 2.0 Initiatives

The pandemic brought renewed momentum for many Workplace 2.0 initiatives. I’m especially interested in augmented reality (AR) and virtual reality (VR) use cases.

AR and VR are gaining traction, especially in use cases like employee training, AR-assisted navigation (such as on corporate campuses), and in online meetings. This year, I had the opportunity to participate in a VR meeting. The cognitive experience was quite fascinating. While on a Zoom meeting, it’s quite obvious that you’re on a video call. But after a few minutes in a VR meeting, you start to feel like you are actually in the room together.

There’s still work to be done to drive mainstream adoption. 2021 will see gains in the adoption of AR and VR, aided by advancements in enterprise-class technologies that address security, user experience, and device management of these solutions.

That said, the biggest gap for VR, in my opinion, is that there is not an equivalent to Microsoft PowerPoint for VR. In other words, in the future, I want to be able to quickly create 3D content that can be consumed in a VR paradigm. Today, there simply is not an easy productivity tool that would allow anyone to quickly create rich 3D content that takes full advantage of the 360-degree panorama afforded by VR. I expect this to be an area of focus for AR and VR technologists moving forward.

Continued Evolution of Intrinsic Security and Data Protection

Innovations in the security space brought intrinsic security from what some called a marketing buzzword into something real.

For instance, today one can leverage virtualization technologies to secure a workload at the moment it is powered on, even before an operating system is installed. That is intrinsic security by definition, and it represents a major step forward from the traditional security model.

In 2021, security will once again be amongst the top technology investments for the year, with both ransomware and security at the edge getting increased attention. Sophisticated ransomware attacks are not just targeting data, but also backups of data and systems. This creates the potential that even system restores are compromised.

We need to change how we protect systems and data. We need to fundamentally rethink what it means to back up and recover systems. Legacy solutions with static protection and recovery approaches will start facing the potential for disruption as the year progresses.

When we look at the edge, a growing number of technology decisions are being made by the lines of business—sometimes even at a local level—and not central IT. This has long created challenges, as smart and connected devices are deployed at edge sites faster than traditional IT processes can accommodate. While we should always strive toward deploying compliant solutions, we need to accept the fact that business velocity and agility requirements can be in conflict.

To that end, we must look at technologies that offer broader discovery of connected systems at the edge and provide adaptive security policy enforcement for those systems. Instead of fighting the battle for control, security leaders must accept there is some degree of chaos and innovate with the expectation of chaos as opposed to outright control.

Applying New Technologies to Old Challenges

In 2021, what’s old may be new again—at least in taking another look at how new technologies can help solve old challenges.

For example, in the area of sustainable computing, there is a lot of energy efficiency to be gained in the traditional data center. VMware currently has an xLabs project to help our customers optimize hot and cold aisle spaces in their data centers. Early studies revealed that a promising amount of energy efficiency can be gained through platform-driven data center heat management.

Additionally, machine learning may soon help improve accessibility. Earlier this month, we announced a project spearheaded by VMware technologists to help developers conduct better automated accessibility testing with machine learning. This project will make it easier for organizations to meet accessibility standards, while reducing costs for the software they build.

2020 was a year of determined progress. Unforeseen challenges taught us to plan and architect for the expectation of change. And we must be resilient enough to adapt to new ways of living and working.

2021 ushers in hope as we navigate whatever our new normal will be. And I’m excited to see how that new normal will be shaped by advancements in technology.

Author: Chris Wolf, VP, Advanced Technology Group, VMware

Source: https://www.vmware.com/radius/5-enterprise-tech-predictions-2021/

FOR A FREE CONSULTATION, PLEASE CONTACT US.

Multi-Cloud: Strategy Or Inevitable Outcome? (Or Both?)

Multi-cloud is top of mind for many technology leaders. What are the benefits? The challenges? And ultimately, is it a right fit for the business and its teams? There’s no consensus about multi-cloud—as evidenced in a recent Twitter thread I started. So, let’s break down why. I’ll share my view of multi-cloud and possible approaches to cloud strategy implementation without (yet) getting into how VMware speeds your cloud journey. Then, let’s hear about yours. There’s a lot to cover on strategy, so I’ll start with definitions.

What is Multi-Cloud?

One of the biggest challenges that surfaced in the discussion of multi-cloud is that we all have slightly different definitions of multi-cloud.

First, as a starting point, a commonly agreed-upon definition of hybrid cloud:

Hybrid cloud: consistent infrastructure and operations between on-premises virtualization infrastructure/private clouds and public cloud.

Hybrid cloud is about connecting on-premises environments and public cloud.  The key distinguishing characteristic of a hybrid cloud is that the infrastructure (and thus operations) is consistent between on-prem and cloud. This means the same operational tools and skillsets can be used in both locations and that applications can easily be moved between locations as no modifications are needed.

Now to define multi-cloud:

Multi-cloud: running applications on more than one public cloud.

This definition could mean a single application/app that is stretched across clouds but more often means multiple apps on multiple clouds (but each app is contained entirely on a single cloud). It could mean that the underlying cloud is partially or completely abstracted away, or it could mean that the full set of cloud capabilities is available to the apps.  Perhaps confusingly, multi-cloud can include on-premises clouds too!  This is just a generalization of the “many apps in many locations” definition.

Multi-Cloud Approaches

Having apps running on multiple clouds presents challenges: How do you manage all these apps, given the vastly different tooling and operational specifics across clouds? How do you select which cloud to use for which apps? How do you move apps between clouds? How much of this do you want to expose to your developers?

There are a variety of approaches to multi-cloud that offer different trade-offs to the above problems.  I see four primary approaches businesses are taking to multi-cloud:

  • No Consistency: This is the default when a business goes multi-cloud. Each cloud has its own infrastructure, app services (e.g., database, messaging, and AI/ML services), and operational tools.  There is little to no consistency between them and the business does nothing to try and drive consistency.  Developers must build apps specifically for the cloud they’re using.  Businesses will likely need separate operations teams and tooling for each cloud.  But apps can take full advantage of all the cloud’s capabilities.
  • Consistent Operations: The business aligns on consistent operations and tooling (e.g., governance, automation, deployment and lifecycle management, monitoring, backup) across all clouds, each with its unique infrastructure and app services.  Developers still build apps to the specifics of the cloud and moving apps between clouds is still a large amount of work, but the business can standardize on an operational model and tooling across clouds.  This can reduce the cost of supporting multiple clouds through consolidated operations teams with less tooling and increase app availability through common, well-tested, and mature operational practices.
  • Consistent Infrastructure: The business leverages a consistent infrastructure abstraction layer on top of the cloud.  Kubernetes is a common choice here, where businesses direct their developers to use clouds’ Kubernetes services.  VMware Cloud is another option, as it’s the consistent VMware SDDC across all clouds.  Common infrastructure standardizes many parts of the app, allowing greater portability across clouds while still leveraging the common operational model (which is now more powerful as the infrastructure has been made consistent!).  Developers can still take advantage of each cloud’s app services though, which is where some cloud stickiness can creep in.
  • Consistent Applications: The business directs its developers to use consistent infrastructure abstraction and non-cloud-based app services for their apps.  This builds on Consistent Infrastructure by also specifying that any app services used must not come directly from the cloud provider.  Instead, app services can be delivered by ISVs (e.g., MongoDB, Confluent Kafka) as Kubernetes operators or as a SaaS offering (e.g., MongoDB Atlas, Confluent Cloud).  Apps are now easily portable across clouds and cloud selection is totally predicated on cost, security, compliance, performance, and other non-functional considerations.
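The “Consistent Infrastructure” approach above can be illustrated with a minimal Python sketch: one app spec, applied unchanged to any cloud’s managed Kubernetes service. The cluster names and the deploy helper are hypothetical stand-ins, not a real provider API.

```python
# Hypothetical sketch: one portable app spec, applied to any cloud's managed
# Kubernetes service. Cluster names (EKS/GKE/AKS-style) are illustrative only.

APP_SPEC = {  # the same spec everywhere -- the "consistent infrastructure" layer
    "name": "orders-api",
    "image": "registry.example.com/orders-api:1.4.2",
    "replicas": 3,
}

def deploy(spec, cluster):
    """Stand-in for applying the spec to a Kubernetes cluster on any cloud."""
    return f"deployed {spec['name']} x{spec['replicas']} to {cluster}"

# The identical spec works against every cloud's Kubernetes endpoint:
results = [deploy(APP_SPEC, c)
           for c in ("eks-us-east-1", "gke-europe-west1", "aks-eastus")]
```

The point is that only the target cluster changes; the spec itself needs no cloud-specific edits, which is what makes apps portable under this approach.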

It’s important to note that no approach is generally “better” than any of the others.  Each approach comes with tradeoffs and it’s up to the business to decide which is best for it based on its unique needs and requirements.  And in some cases, businesses may leverage more than one approach, with different apps or development teams taking different approaches.

Strategy or Inevitable Outcome?

The natural next question is whether you should go multi-cloud.  In an ideal world, running all your apps on a single cloud is likely best for most businesses.  You can standardize everything you’re doing to that one cloud, simplifying app implementation and operations.  Apps can take advantage of all the specific innovative features of that cloud.  You can negotiate higher discounts with the cloud provider because you have higher usage than you would if you spread your workloads over many different clouds.

The problem, though, is that it’s very hard to run all apps in only one cloud.  Acquisitions may be using a different cloud.  After the acquisition closes, the question is then whether to move all the apps onto a single cloud (likely using precious time and resources that could be invested in integrating that acquisition) or living with multi-cloud.  Shadow IT is still happening in many businesses, where developers or lines of business make independent decisions to use another cloud technology, meaning you’ll likely end up in a multi-cloud situation even if you try to avoid it.  Finally, even if you can deal with those problems, what if your preferred cloud isn’t innovating in a new area as fast as another cloud?  It may be necessary for the business to start using that other cloud, putting you into a multi-cloud world.

The general takeaway is that, try as hard as you might, you likely won’t stay single-cloud for very long.  Something will happen to make you go multi-cloud.  Really, it’s just a question of whether it’s due to a proactive strategy or an inevitable outcome driven by one or more of the reasons above.  In either case, having a multi-cloud plan is a must!

Author: Kit Colbert, VP & CTO, Cloud Platform BU at VMware

Source: https://octo.vmware.com/multi-cloud-strategy-or-inevitable-outcome-or-both

Dell’s 2021 Server Trends & Observations

A summary of the top enterprise trends to look forward to in 2021.

With the start of a new year, we can say goodbye to the tumultuous and challenging 2020 – a year that brought about monumental changes in our industry through acquisitions, technology introductions, and of course, a shift to a remote workforce. No one could have predicted all the changes that happened last year, but now we have an opportunity to look back on how the server trends and technologies we detailed last year impacted our industry. And as we have done for the past several years, we want to continue our tradition in Dell’s Infrastructure CTO group of highlighting some of the most interesting technology and industry trends we expect to see impacting our server incubation and product efforts this year. These trends were compiled by polling our senior technologists in the server organization, who are looking at the most impactful influences on their workstreams.

When the technologists provided their inputs, the underlying theme that emerged was the desire to manage the data life cycle – curate, transport, analyze and preserve data – in the most effective and secure way, while producing the most efficient business outcomes from the infrastructure. Since data generation has continued to increase, customers are looking for ways to leverage third-party services in an integrated offering that helps them more quickly analyze and extract value from the right data in the most cost-effective and secure manner. This paradigm has also forced owners of the IT equipment that performs these analyses to ensure they are using the most effective technology integrations. These integrations need to be managed with minimal operational expense across a continuum of Edge, Core and Cloud architectures. Finally, 2020 created another challenge to carry forward: the need to adopt new technologies while more remote staff and remote users stress the infrastructure in ways not expected this early in most digital transformation plans.

So, with that introduction, let us provide you the Top Trends for 2021 that are influencing our server technology efforts and product offerings:

  • aaS becomes the Enterprise theme. As technology velocity continues to rise, enterprise customers deal with constrained budgets and legacy skillsets while still needing to focus on differentiated business outcomes with the most beneficial price/performance and the least amount of bring-up and maintenance overhead. The options for on-prem Infrastructure aaS offerings allow customers to be nimble and focus on their business value through diverse deployments while maintaining their data security and governance with trusted infrastructure.
  • Server Growth is Building Vertically. As customers look for the most efficient outcomes from their infrastructure, the industry will continue to see more verticalization and specialization of offerings. Integrated solutions will address packaging and environmental considerations; SW ecosystem enablement and domain-specific accelerators address unique performance and feature requirements that are optimized for specific business outcomes.
  • More Data, More Smarts. The challenges of data velocity, volume and volatility continue, requiring continued AI/ML adoption for analytics alongside an increased focus on solving data life-cycle challenges. The integration of data curation models, transport methods, and preservation and security architectures with faster analysis will all be key to supporting and monetizing the Internet of Behaviors.
  • The Emergence of the Self-Driving Server. Customers will start seeing the use of telemetry, analytics, and policies to achieve higher levels of automation in their systems management infrastructure. Similar to the driver-assist/autonomy levels of autonomous vehicles, AI Ops capabilities with systems management will usher in the era of moving automated tasks to automated decisions, with implementations showing up in addressing runaway system power and policy advising recommendation engines.
  • Goodbye, SW Defined. Hello, SW Defined with HW Offload. Application architectures are evolving to create control plane and data plane separation. The control plane stays as a software layer while the data plane moves to programmable hardware in the form of service-processor add-in cards, which allow bare-metal and containerized applications to run with disaggregated infrastructure software (network virtualization, storage virtualization, GPU virtualization, security services), creating Intent-Based Computing for customer workloads.
  • 5G is Here! Seriously, it is this year. After several years of hype and promises, we will see the proliferation of 5G and with it will come shifts in paradigms around communication infrastructure, remote management models and connectivity that impact server form-factors and features. As businesses develop more edge infrastructure to handle the generation and influx of data, 5G will create the need for customers to reevaluate their edge connectivity and infrastructure management offerings to take advantage of 5G capabilities.
  • Rethinking Memory and Storage to be Data Centric. The industry is moving from compute-centric architectures to data-centric architectures, and that transition is driving new server-connected memory and storage models for IT. Technologies around persistent, encrypted and tiered memory inside the server, along with remotely accessed SCM and NVMe-oF data through new industry fabric standards, are creating innovative IT architectures for optimal data security, scaling and preservation.
  • Adopting new Server technology while Being Remote. The world has changed, and businesses have been forced to not just map a digital transformation but realize it to operate. Companies dealing with faster digital transitions of tools, processes and infrastructure need to operate with a remote work force. This transition is forcing companies to evaluate new server technologies and assess resource requirements which will emphasize the necessity to utilize server capabilities around debug, telemetry and analytics in a remote fashion to keep business continuity going forward.
  • It’s not a CPU Competition, it’s a Recipe Bake-off. The processor landscape is changing, and it is becoming an environment of acquisitions, specializations and vendor differentiated integrations. We see Intel, AMD and Nvidia all making acquisitions to provide each with CPUs, DPUs and Accelerators in their portfolio. The real winner will be able to leverage their portfolio of silicon products and software libraries to form recipes of integrated offerings for targeted workloads to help end-customers optimize business outcomes.
  • Measure your IT Trust Index. Security around server access and data protection has never been more challenging, so customers need to be able to quantify their security confidence in order to gauge infrastructure trustworthiness and identify digital risks. Customers need to analyze product origins and features, new security technologies and segment-specific digital threats in the backdrop of the increasing regulatory landscape to formulate their measurement of IT trust from the Edge to Core to Cloud.
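The “self-driving server” trend above can be sketched as a simple policy loop that turns telemetry into automated decisions, much like driver-assist levels in vehicles. The thresholds, field names and actions here are illustrative assumptions, not a real systems-management API.

```python
# Hypothetical sketch of the "self-driving server": a policy function that
# maps a telemetry sample to an automated decision. Thresholds are made up.

def decide(telemetry, power_cap_watts=450):
    """Return an action for one telemetry sample."""
    if telemetry["power_watts"] > power_cap_watts:
        return "throttle"             # contain runaway system power
    if telemetry["fan_rpm"] > 14000 and telemetry["inlet_temp_c"] > 35:
        return "recommend-rebalance"  # policy-advising recommendation
    return "ok"                       # nothing to do

sample = {"power_watts": 480, "fan_rpm": 9000, "inlet_temp_c": 27}
action = decide(sample)  # exceeds the power cap, so the decision is "throttle"
```

In a real AI Ops pipeline the thresholds themselves would be learned from historical telemetry rather than hard-coded, but the automated-task-to-automated-decision shape is the same.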

Author: Stephen Rousset, Dell ISG Technology and Innovation Office

Source: https://www.delltechnologies.com/en-us/blog/dells-2021-server-trends-observations/

Top Tips For Securing The Remote Workforce In 2021

This year, organizations found an unprecedented amount of their workforces suddenly working remotely. With employees no longer under the same roof – and network – as their colleagues, IT teams were tasked with making crucial adjustments to how they secure their workforce. With little preparation or planning, IT leaders became responsible for an influx of corporate devices across numerous locations and different networks.

In 2021, businesses will continue to allow employees to work from home. How can IT ensure they are able to effectively support their organizations in a long-term remote work environment?

Throughout the remote work revolution, we pulled together some helpful tips to guide IT through some of the challenges of supporting an almost entirely remote workforce. While these tips were written in the early stages of the pandemic, they are worth revisiting as your team looks towards rebuilding and planning in 2021:  

KEEP THE USER TOP OF MIND

It is unlikely most of your employees were prepared to work from home full-time back in March. Throughout the past ten months, your employees have had to alter their physical home environment to accommodate a workspace.

Some might be missing day-to-day human interaction. Others are probably tired of navigating technical difficulties without an IT admin right down the hall. While it’s unclear when consistently working in an office will be a reality again, one thing is certain: your IT team should be helping new and existing employees feel empowered in their long-term remote work environment.

For new hires and existing employees alike, there are plenty of onboarding and technical considerations you should be thinking about:

  • Are new hires using a corporate desktop or their own personal computer?  
  • Do employees of every level have a stable internet connection?  
  • Can the corporate VPN handle the additional capacity to support a large remote userbase? 
  • Does your organization’s helpdesk have the capacity for increased ticket volume?  

Having solutions in place to tackle these technical issues will not only help employees feel supported from an IT perspective but can help prevent potential security threats in the user’s remote work environment – which could ultimately throw a wrench in your business continuity plans.

CHOOSE THE TECHNOLOGY THAT WORKS BEST FOR YOU

When the majority of the workforce went remote, one common question we heard was whether VDI or VPN was the better solution. The answer is not as clear as you may think.

To determine which option is best for your organization, your IT team must define and rank its top priorities. What will be the fastest, easiest or cheapest for you to deploy for your specific situation? How important are these factors to you? What will provide the best user experience and work for the most users? And what provides the security model that’s appropriate for your organization?

In the short term, it was wise to do whatever the company was most familiar with, to get employees safely working remotely as soon as possible. But in 2021, organizations may be revisiting this question. Yes, some employees will be coming back into offices, but overall, most organizations will continue to be much more distributed than before 2020.

As companies optimize their approach to remote work, many will look again at their VDI and VPN strategies and consider some of the pros and cons we wrote about in March. But in addition, they will also take time to consider other initiatives, such as Zero Trust security and Windows 10 management from the cloud.

BE PROACTIVE WITH INTRINSIC ZERO TRUST SECURITY

Businesses have had to make adjustments in order to ensure the safety of their employees and the smooth operation of their business. Securing the enterprise is a lot easier when all its endpoints (laptops, mobile devices, etc.), applications and users are within the network perimeter. This model was starting to break down long before this year, but of course the effects of 2020 accelerated this trend like never before.

To secure the enterprise beyond the perimeter, IT leaders should adopt a Zero Trust security model. Unlike the traditional security model, Zero Trust does not implicitly trust any device, user or app. Instead, it continuously verifies trust across all three before granting access to data.   
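A minimal sketch of that continuous-verification idea, with hypothetical signal names standing in for real device-posture, identity and app checks; no real Zero Trust product exposes exactly this API.

```python
# Minimal sketch of Zero Trust's "never trust, always verify" idea: device,
# user and app signals are ALL checked before any access is granted.
# Signal names and policy are illustrative assumptions, not a product API.

def grant_access(device, user, app):
    """Grant access only if every trust signal passes; default is deny."""
    checks = [
        device.get("compliant", False),   # e.g. disk encrypted, OS patched
        user.get("mfa_verified", False),  # identity verified this session
        app.get("allowed", False),        # app is sanctioned for this data
    ]
    return all(checks)  # any single failed signal denies access

ok = grant_access({"compliant": True}, {"mfa_verified": True}, {"allowed": True})
denied = grant_access({"compliant": True}, {"mfa_verified": False}, {"allowed": True})
```

Note the default-deny design: a missing signal fails the check, which is the opposite of the perimeter model’s implicit trust.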

This security model offers greater flexibility and choice to employees to work from anywhere and from any device while ensuring optimal security at all times. And while most organizations agree that Zero Trust is the right approach to address the security needs in a dynamic environment, many haven’t taken a holistic approach to deploying it. 

This could inadvertently leave holes in your organization’s security posture, leaving cybercriminals with a valuable opportunity to exploit. In order to proactively secure your reputation and your company, employee and customer data, intrinsic security cannot be an afterthought.

Invest in a solution that helps you prevent, detect and remediate as quickly as possible for business continuity and productivity.

While these tips have proven useful for employers navigating how to best secure their remote workforce, it’s important to remember that the journey to intrinsic security is just that – a journey.  

As we enter 2021, new techniques will likely emerge as businesses evolve with the world around them. Your IT teams are no different. They must continue working towards solutions that will both empower and protect their staff.

Author: EUC Editorial Team, VMware

Source: https://blogs.vmware.com/euc/2020/12/tips-for-securing-the-remote-workforce-2021.html?src=so_5fcff0dc35951

Edge Computing

As we move into the new decade, technological advancement is surely a top priority for every organization. We live in a data-driven world, where the average person is estimated to generate 1.5 GB of data per day. With the growing number of IoT devices and applications and the bulk of data they generate, performing all computation in data centers or cloud servers is no longer sufficient or efficient. Cloud computing has proven its value to many enterprises, letting them focus on core business competencies while reducing cost: data is sent to and accessed from remote servers, using Internet-based services to support business processes. But the built-in latency of the cloud is no longer adequate for deploying machine intelligence and producing real-time outputs. The technology has certainly made people’s lives easier, but that’s not the end of the story – and here edge computing arrives to make computing faster. It’s the next wave in evolving data center infrastructure, powered by Internet of Things (IoT) technologies and 5G networks.

Over the past several years, organizations have started to integrate cloud into their infrastructure. Some think that edge computing might replace cloud computing, but that’s not true; it just means the cloud is coming closer to you. Organizations will need to implement edge and cloud computing together to handle the ever-growing data of the coming years. Edge computing will certainly reduce the volume of data transmitted over the internet to the cloud, but it cannot replace the cloud. Even though the major part of the data is processed or analyzed at the device, data still needs to be stored somewhere for future reference, and that’s where the cloud remains important. Edge and cloud are used together, just as we use our two hands.

What is Edge Computing?

The term “Edge” in this context refers to the geographic distribution of network resources. Edge computing allows data collection, analysis and computation to be performed close to the data source instead of relying on a centralized cloud network that can be thousands of miles away. It distributes processing away from the centralized network into micro data centers that sit closer to where the data is generated.

Key advantages of Edge Computing

  • Data without latency: Edge computing speeds up data transmission because the distance data must travel decreases. It enables applications to respond to data almost instantly.
  • Reduces Internet Bandwidth: Relying less on the cloud means certain data or applications can operate reliably offline. This is very useful in areas where network connectivity is poor.
  • Provides a certain level of Security: As data is collected and processed locally, transferring sensitive data to the cloud can be avoided, so the impact is smaller if the cloud suffers a cyberattack.
  • Reliable: Because the vital processing is done locally, IoT edge devices can operate uninterrupted even during data center downtime. The chance of data being entirely unavailable is close to zero.
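The latency and bandwidth advantages above come from pre-processing data where it is generated. A minimal sketch, assuming a hypothetical sensor workload: raw readings stay at the edge, and only a small summary is uplinked to the cloud.

```python
# Illustrative sketch of edge pre-processing: aggregate raw sensor readings
# locally and ship only a compact summary to the cloud, cutting both the
# bandwidth used and the round-trip latency. The summary scheme is made up.

def summarize_at_edge(readings):
    """Reduce many raw readings to one small record before uplink."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

raw = [21.0, 21.5, 22.0, 35.0]   # raw samples never leave the edge site
uplink = summarize_at_edge(raw)  # only this summary goes to the cloud
```

Four floats become one four-field record here; with thousands of samples per second the reduction is what makes offline operation and low-bandwidth links workable.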

IoT development can be seen in businesses across every industry, and IoT devices will need edge computing and 5G networks to work effectively. Fortunately, the expansion of edge computing will make it easier for businesses to scale their operations. Companies no longer need to set up centralized or private data centers that are expensive to build, maintain or replace when it’s time to expand data analysis. Organizations can expand their edge network quickly and cost-effectively by combining partner services with edge computing data centers. To adapt to evolving markets and scale their data needs, organizations no longer have to rely on a centralized infrastructure.

Edge computing can serve a variety of businesses and is relevant to almost every sector – manufacturing and construction, finance and health care – and people will gradually adopt the edge. The banking sector is implementing edge computing to give ATMs the ability to collect and process data with faster response times. Financial firms dealing with funds and shares are also adopting edge computing, placing servers in data centers near stock exchanges to provide accurate, up-to-date information without the lag that could lead to real monetary loss.

Enterprises investing in edge computing should know that it is not a general-purpose platform like cloud computing; it is a specialized approach for solving a specific set of issues. Adopting the technology just because it’s a trend doesn’t serve the need, so organizations should first ensure the investment is really required. Edge computing still has various challenges to overcome before it can be used practically at larger scale. But once edge computing becomes the wave, it’s going to change the way business is done.

Are you using cloud computing and ready to upgrade your game point of business by advancing to Edge computing?

Please write to us at marketing@goapl.com

Cisco: 5 Hot Networking Trends For 2020

CISCO EXEC SAYS SD-WAN, WI-FI 6, MULTI-DOMAIN CONTROL, VIRTUAL NETWORKING AND THE EVOLVING ROLE OF NETWORK ENGINEERS WILL BE BIG IN 2020

Hot trends in networking for the coming year include SD-WAN, Wi-Fi 6, multi-domain control, virtual networking and the evolving role of the network engineer into that of a network programmer – at least according to Cisco.

They revolve around the changing shape of networking in general – that is, the broadening of data-centre operations into the cloud and the implications of that change, said Anand Oswal, senior vice president of engineering in Cisco’s Enterprise Networking Business.

“These fundamental shifts in where business processes run and how they’re accessed, is changing how we connect our locations together, how we think about security, the economics of networking, and what we ask of the people who take care of them,” Oswal said.

WI-FI 6 AND 5G

First up, wireless technology – especially Wi-Fi 6 – will get into the enterprise through the employee door and through enterprise access-point refreshes. The latest smartphones from Apple, Samsung, and other manufacturers are Wi-Fi 6 enabled, and Wi-Fi 6 access points are currently shipping to businesses and consumers.

5G phones are not yet in wide circulation, although that will begin to change in 2020, though mostly for consumers and towards the end of the year. Oswal wrote that Cisco projects more people will be using Wi-Fi 6 than 5G through 2020.

2020 will also see the beginning of a big improvement in how people use Wi-Fi networks. The potential growth of the Cisco-led OpenRoaming project will make joining participating Wi-Fi networks much easier, Oswal said. OpenRoaming, which uses the underlying technology behind Hotspot 2.0 / IEEE 802.11u, promises to let users move seamlessly between wireless networks and LTE without interruption — emulating mobile network connectivity. Current project partners include Samsung, Boingo, and GlobalReach Technologies.

2020 will also see the adoption of new frequency bands, including the beginning of the rollout of “millimeter wave” (24GHz to 100GHz) spectrum for ultra-fast but short-range 5G, as well as Citizens Broadband Radio Service (CBRS) at about 3.5GHz. This may lead to new private networks that use LTE and 5G technology, especially for IoT applications.

“We will also see continued progress in opening up the 6GHz range for unlicensed Wi-Fi usage in the United States and the rest of the world,” Oswal wrote.

As for 5G services, some will roll out in 2020 but “almost none of it will be the ultra-high speed connectivity that we have been promised or that we will see in future years,” Oswal said. “With 5G unable to deliver on that promise initially, we will see a lot of high-speed wireless traffic offloaded to Wi-Fi networks.”

In the long run, “In combination with the improved performance of both Wi-Fi 6 and (eventually) 5G, we are in for a large – and long-lived – period of innovation in access networking,” Oswal wrote.

IT’S AN SD-WAN WORLD

“We are seeing a ton of momentum in the SD-WAN area as large numbers of companies need secure access to cloud applications,” Oswal said. The dispersal of connectivity – the growth of multicloud networking – will force many businesses to re-tool their networks in favor of SD-WAN technology, he said.

“Meanwhile the large cloud service providers, like Amazon, Google and Microsoft are connecting to networking companies – like Cisco – to forge deep partnership links between networking stacks and services,” Oswal wrote.

Oswal said he expects such partnerships will only deepen next year, and that concurs with recent analysis by Gartner.

“SD-WAN is replacing routing and adding application-aware path selection among multiple links, centralized orchestration and native security, as well as other functions. Consequently, it includes incumbent and emerging vendors from multiple markets (namely routing, security, WAN optimization and SD-WAN), each bringing its own differentiators and limitations,” Gartner wrote in a recent report.

In addition Oswal said SD-WAN technology is going to lead to a growth in business for managed service providers (MSPs), many more of which will begin to offer SD-WAN as a service.

“We expect MSPs to grow at about double the rate of the SD-WAN market itself, and expect that MSPs will begin to hyper-specialize, by industry and network size,” Oswal wrote.

ALL-INCLUSIVE MULTI-DOMAIN NETWORKS

In the Cisco world, blending typically siloed domains across the enterprise and cloud to the wide-area network is getting easier, and Oswal says that will continue in 2020. The idea is that its key software components – Application Centric Infrastructure (ACI) and DNA Center – now enable what Cisco calls multidomain integration, which lets customers set policies to apply uniform access controls to users, devices and applications regardless of where they connect to the network.

ACI is Cisco’s software defined networking (SDN) data-center package, but it also delivers the company’s intent-based networking technology, which brings customers the ability to automatically implement network and policy changes on the fly and ensure data delivery.

DNA Center is a key package as it features automation capabilities, assurance setting, fabric provisioning and policy-based segmentation for enterprise networks. Cisco DNA Center gives IT teams the ability to control access through policies using software-defined access (SD-Access), automatically provision through Cisco DNA Automation, virtualize devices through Cisco Network Functions Virtualization (NFV), and lower security risks through segmentation and encrypted traffic analysis.

“For better management, agility, and especially for security, these multiple domains need to work together,” Oswal wrote. “Each domain’s controller needs to work in a coordinated manner to enable automation, analytics and security across the various domains.”

The next generation of controller-first architectures for network fabrics allows the unified management of loosely coupled systems using APIs and defined data structures for inter-device and inter-domain communication, Oswal wrote. “The intent-based networking model that enterprises began adopting in 2019 is making network management more straightforward by absorbing the complexities of the network,” he wrote.
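The "defined data structures" Oswal describes can be pictured as a small intent payload handed to a controller API. The sketch below is a minimal, hypothetical example; the field names are assumptions and do not correspond to any specific vendor's schema:

```python
import json

# Illustrative sketch of an "intent" expressed as a defined data structure
# that a controller API might accept. All field names are invented for
# illustration; a real controller publishes its own schema.

def build_intent(app: str, user_group: str, qos_profile: str) -> str:
    """Serialize a declarative policy intent for submission to a controller."""
    intent = {
        "intent": "apply-policy",
        "match": {"application": app, "user-group": user_group},
        "action": {"qos-profile": qos_profile},
    }
    return json.dumps(intent, sort_keys=True)
```

The operator states the desired outcome in the payload; translating it into per-device configuration is the controller's job.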

THE NETWORK AS SENSOR

The notion of the network being used for something more important than speeds and feeds has been talked about for a while, but 2020 may be the year the idea takes hold.

“With software that is able to profile and classify the devices, end points, and applications – even when they are sending fully encrypted data – the network will be able to place the devices into virtual networks automatically, enable the correct rule set to protect those devices, and eventually identify security issues extremely quickly,” Oswal wrote.
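A toy sketch of that profile-and-place flow: classify an endpoint from a few observable attributes, then map the class to a virtual network and rule set. Real systems infer device identity from traffic telemetry, even when payloads are encrypted; the attributes, classes and segment names here are assumptions made for illustration:

```python
# Toy sketch of profiling-based placement. The classification heuristics,
# class names and segment mappings are invented for illustration; real
# systems derive device identity from rich traffic telemetry.

CLASS_TO_SEGMENT = {
    "ip-camera":      ("vn-iot",      "iot-lockdown"),
    "medical-device": ("vn-clinical", "clinical-strict"),
    "workstation":    ("vn-corp",     "corp-default"),
}

def classify(attrs: dict) -> str:
    """Guess a device class from observable attributes (toy heuristics)."""
    if attrs.get("protocol") == "rtsp":
        return "ip-camera"
    if attrs.get("oui_vendor") == "medtech":
        return "medical-device"
    return "workstation"

def place(attrs: dict) -> tuple:
    """Return the (virtual network, rule set) for a classified endpoint."""
    return CLASS_TO_SEGMENT[classify(attrs)]
```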

“Ultimately, systems will be able to remediate issues on their own, or at least file their own help-desk tickets. This becomes increasingly important as networks grow increasingly complex.”

Oswal said this intelligence could prove useful in wireless networks, where the network can collect data on how people and things move through and use physical spaces, such as IoT devices in a business or medical devices in a hospital.

“That data can directly help facility owners optimize their physical spaces, for productivity, ease of navigation, or even to improve retail sales,” Oswal wrote. “These are capabilities that have been rolling out in 2019, but as business execs become aware of the power of this location data, the use of this technology will begin to snowball.”

THE NETWORK ENGINEER CAREER CHANGE

The growing software-oriented network environment is changing the resume requirements for network professionals. “The standard way that network operators work – provisioning network equipment using command-line interfaces like CLI – is nearing the end of the line,” Oswal wrote. “Today, intent-based networking lets us tell the network what we want it to do and leave the individual device configuration to the larger system itself.”


Oswal said customers can now program updates, rollouts, and changes using centralized networking controllers rather than working directly with devices or their own unique interfaces.

“New networks run by APIs require programming skills to manage,” Oswal wrote.  “Code is the resource behind the creation of new business solutions. It remains critical for individuals to validate their proficiency with new infrastructure and network engineering concepts.”
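The shift from per-device CLI work to API-driven operation can be sketched with a stand-in controller: one API call, and the controller fans the change out to its whole inventory. The endpoint path, payload fields and the FakeController class are all hypothetical, shown only to illustrate the workflow:

```python
# Contrast sketch: instead of logging into each device's CLI, an operator
# makes one call against a controller API and lets it handle the rollout.
# FakeController, its endpoint path and payload fields are invented for
# illustration and stand in for a real controller's REST API.

class FakeController:
    def __init__(self):
        self.inventory = ["edge-1", "edge-2", "edge-3"]
        self.applied = {}

    def post(self, path: str, payload: dict) -> int:
        """Accept a deploy request and fan it out to every managed device."""
        if path == "/api/v1/template/deploy":
            for device in self.inventory:
                self.applied[device] = payload["template"]
            return 202  # accepted for asynchronous rollout
        return 404

ctrl = FakeController()
status = ctrl.post("/api/v1/template/deploy", {"template": "ntp-update-v2"})
```

The design point is the one Oswal makes: the operator's skill moves from device syntax to composing API calls and validating the result programmatically.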

Oswal noted that it will not be an easy change because retraining individuals or whole teams can be expensive, and not everyone will adapt to the new order.

“For those that do, the benefits are big,” Oswal said. “Network operators will be closer to the businesses they work for, able to better help businesses achieve their digital transformations. The speed and agility they gain thanks to having a programmable network, plus telemetry and analytics, opens up vast new opportunities.”

This year Cisco revamped some of its most critical certification and career-development tools in an effort to address the emerging software-oriented network environment. Perhaps one of the biggest additions is the new set of professional certifications for developers, built around Cisco’s growing DevNet developer community.

The Cisco Certified DevNet Associate, Specialist and Professional certifications will cover software development for applications, automation, DevOps, cloud and IoT. They target software developers, as well as network engineers building the software proficiency to create applications and automated workflows for operational networks and infrastructure.

Source: https://www.networkworld.com/article/3505883/cisco-5-hot-networking-trends-for-2020.html
