Galaxy Office Automation

Financial organisation achieved cost savings by implementing hyperconverged infrastructure

The customer is one of the world's largest investment management companies in emerging market equities, with offices in Australia, Brazil, Canada, China, Colombia, Hong Kong, India, Korea, Taiwan, the U.K., the United States and Vietnam. Headquartered in Seoul, South Korea, the firm manages approximately US$127 billion in assets globally through a diversified platform offering market-leading franchises in traditional equity and fixed income products, ETFs and alternative strategies such as real estate, private equity and hedge funds. It focuses on providing equity and fixed income investment advisory services to mutual funds, foreign investment trusts, and institutions.

The Challenge

The customer had an ageing inventory that needed to be refreshed. They were running critical applications such as MS Exchange and Active Directory on this ageing infrastructure, and were looking for a design that would help save data center space, reduce the number of software licenses, and lower power and cooling costs. In short, they wanted a solution in line with the latest technology and priced within their desired budget. Their main concern, however, was maximum uptime and ease of management, since they had very few IT resources.

The Solution

The Galaxy team analyzed their existing infrastructure design and recommended a hyper-converged architecture based on Dell EMC's VxRail. This is a software-centric architecture that tightly integrates compute, storage, networking and virtualization resources, along with other technologies, in a single appliance supported by a single vendor. It allows the integrated technologies to be managed as a single system through a common toolset.

VMware's vSphere and vSAN virtualization solution was recommended to the customer. VMware's Software-Defined Storage (SDS) strategy is to evolve storage architectures through the pervasive hypervisor, bringing the same kind of simplicity, efficiency, and cost savings to storage systems that server virtualization has already brought to compute. Software-Defined Storage abstracts the underlying storage through a virtual data plane, making the VM, and thus the application, the fundamental unit of storage provisioning and management across heterogeneous storage systems.

By creating a flexible separation between applications and available resources, the hypervisor can balance all the IT resources an application needs: compute, memory, storage and networking.

VMware's software-defined storage solutions enhance today's data centers by delivering:

  • Per-application Storage Services: SDS applies at the VM level, allowing storage services to be tailored to the precise requirements of an application without affecting the smooth functioning of neighboring applications. Administrators are in complete control of which storage services, and therefore costs, are consumed by any given application (a small illustrative sketch follows this list).
  • Rapid changes to storage infrastructure: SDS uses a dynamic and non-disruptive model, just as in compute virtualization. IT administrators can precisely match storage supply to application demand at the exact time the resources are needed, leaving room for flexibility in provisioning, allocating and re-allocating storage services on an as-needed basis for each application.
  • Heterogeneous storage support: SDS lets you leverage existing storage solutions, such as SAN and NAS, or direct attached storage on x86 industry-standard hardware. With industry standard servers that are the backbone of Hyper-Converged Infrastructure, IT organizations can design low-cost and scalable storage environments that can easily adjust to specific and ever-changing storage needs.
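
The per-application policy idea above can be pictured with a small, purely illustrative sketch in plain Python. This is not VMware's SPBM/vSAN API; the policy fields, datastore names and limits are hypothetical. The point is simply that each VM carries its own storage policy, and the platform places it only on storage that can satisfy that policy.

```python
# Illustrative only: a toy model of per-VM, policy-based storage placement.
# Field names, datastore names and limits are hypothetical, not the vSAN/SPBM API.
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    name: str
    failures_to_tolerate: int   # how many data copies the application needs
    min_iops: int               # performance the application expects

@dataclass
class Datastore:
    name: str
    max_failures_to_tolerate: int
    max_iops: int

def place_vm(vm_name: str, policy: StoragePolicy, datastores: list) -> str:
    """Return the first datastore that can honour this VM's storage policy."""
    for ds in datastores:
        if (ds.max_failures_to_tolerate >= policy.failures_to_tolerate
                and ds.max_iops >= policy.min_iops):
            return f"{vm_name} -> {ds.name} (policy: {policy.name})"
    raise RuntimeError(f"No datastore satisfies policy {policy.name!r} for {vm_name}")

datastores = [Datastore("bronze-tier", 1, 5_000), Datastore("gold-tier", 2, 50_000)]
print(place_vm("exchange-db", StoragePolicy("gold", 2, 20_000), datastores))
print(place_vm("file-share", StoragePolicy("bronze", 1, 2_000), datastores))
```

In a real vSAN environment the same intent is expressed through storage policies attached to VMs, with the placement and enforcement handled by the platform itself.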

The Benefits

Galaxy delivered significant cost savings for the customer, directly linked to their regular business operations:

  1. The hyper-converged architecture is compact, with server and storage systems residing in the same box. Therefore, no separate SAN storage needs to be installed, avoiding the additional complexity of external SAN switches and cables.
  2. This new solution also reduces their power consumption considerably, which was one of their pain areas earlier.
  3. On the VMware side, because we could pack more cores onto the same physical platform, we were able to substantially reduce the number of software licenses required.

All this was provided within the allotted budget the customer had set aside for the inventory renewal.

Leading public sector bank secured its data by upgrading its antivirus solution

The customer is a major public sector bank in India, with over 15 million customers across the length and breadth of the country. Headquartered in Pune and with a network of 1,981 branches, the bank is larger than many other public sector banks in the state of Maharashtra. It offers a wide range of services, including consumer banking, corporate and investment banking, private banking, insurance, mortgage loans, private equity, savings, securities, and asset and wealth management.

The Challenge

The bank was protecting its data assets with a badly outdated version of an antivirus solution. It held over 15,000 licenses, many of which had to be updated and upgraded each year by installing patches, so fresh licenses had to be procured and old licenses upgraded annually. The incumbent vendor could not handle this project efficiently and provided very poor support and service. The bank therefore decided to look for a replacement vendor that could protect its sensitive information with the latest security solutions and manage the installation and upgrade of its antivirus estate.

The Solution

Although multiple competitors were ready to grab this opportunity and enter a pricing war with highly competitive quotations, the Galaxy team instilled confidence in the bank's IT team in our ability and commitment to complete this project. We conducted a thorough analysis and assessment of the customer's business operations and their vulnerability to threats and ransomware. Our team also had to reach out to, and work with, employees in remote rural areas during this initiative. This had to be done with sensitivity to their work timings, taking care not to disrupt ongoing business operations. We also had to explain the need for an up-to-date antivirus solution and deal with a general lack of support and awareness regarding the importance of this initiative.

Galaxy provided the latest and most secure antivirus solution from Symantec. We also provided adequate training, skill sets and resources to manage all their branches across various regions on an ongoing basis.

The Benefits

After installation, bank employees and other stakeholders are able to use the latest features of the antivirus software that were not available before, including automatic scanning and the automatic installation of upgrade patches for branches connected to the internet.

For branch offices with no internet connectivity, such as those in rural areas, the Galaxy team, together with the bank's internal IT team, visits regularly to ensure all hardware has the latest patches installed.

Today, all 1,981 desktops across their branch offices are running up-to-date software. The bank now enjoys better productivity thanks to zero downtime and no threats or attacks. This has strengthened its commitment to using technology to further enable its business and retain its competitive position in the industry.

Housing finance company reduces operating costs by migrating legacy infrastructure across remote locations

The customer is one of the largest housing finance companies in India, with its registered and corporate office in Mumbai. Its main objective is to provide long-term finance to individuals for the purchase or construction of a house or flat for residential use, and to provide finance against existing property for businesses and professionals. The company also provides long-term finance to persons engaged in the business of constructing houses or flats for residential use and sale.

The Challenge

The customer has 7 regional offices, 21 back-offices and 240 marketing units across India. Most of the IT infrastructure used in their offices was obsolete and past its warranty period. Their legacy hardware was also bulky and occupied more physical space; it was not just harder to handle, shift or transport, but also consumed more power. It was therefore becoming increasingly difficult for them to conduct day-to-day business operations speedily and efficiently using inventory that was well past its mandated shelf life, and they were looking to replace this old inventory with the latest hardware for office use.

Herein lay a greater challenge: this ageing inventory had to be replaced across 6 locations, some of them in remote interiors on the periphery of smaller towns across the country. End users at these locations were comfortable working on their old inventory and were not open to a sudden disruption of their established patterns of hardware usage. They were not familiar with the latest desktops and initially found it difficult to adapt to them. Finally, the migration to newer inventory had to be done in harmony with their work timings, in such a way that core operations continued to run smoothly without being impacted.

Another challenge was the lack of internet connectivity in the remote areas where some of their offices were located. In such locations, no Internet Service Provider (ISP) offers a direct fibre broadband service of the kind needed to serve enterprise clients. The workaround to this problem is cumbersome and needs dedicated attention.

The Solution

The Galaxy team held a series of detailed discussions with all stakeholders in their IT ecosystem, from decision makers to the end users who work on these machines to conduct daily operations. We compiled a list of all the likely issues and escalation tickets that might crop up during the customer's transition to modern hardware.

We consulted several OEMs to replace the legacy hardware, and finally proposed the latest 8th-generation desktops from Dell EMC. These are efficient as well as easy to use; they take up much less space and are light to carry. End users were trained on how best to leverage them, along with a list of "Do's & Don'ts" to keep in mind. We also helped with the safe and quick transfer of all important business data from the legacy machines to the new desktops.

We also managed to secure a lucrative buy-back deal for the customer, under which they received price discounts for returning their old inventory to us. To address the other issue, the lack of fibre internet connectivity at each remote location, we approached the local BSNL offices for support. BSNL leases space to install a modem at its site, which is linked via RF connectivity to another modem installed in the customer's office for data transfer over the internet. For each remote location where the customer operates, we therefore had to identify and coordinate with the local BSNL program manager who oversees such operations, and then work with their support team to take this to closure.

The Benefits

Customer employees now use the latest 8th-generation desktops, which are much more compact, efficient, easy to use and consume less power. All their old data was transferred without any business downtime, with steps taken to prevent data loss. The customer is satisfied that we could help with this mass migration across remote locations, and that the new model is both user-friendly and more productive. They have now engaged us for preventive yearly maintenance and are also looking to approach us for future requirements.

Financial services group increases its ROI by implementing datacentre consolidation and business continuity solution

The customer is one of the largest financial services companies in India, delivering a wide range of comprehensive financial solutions (wealth management, investment banking, corporate finance advisory, brokerage and distribution, commodities, mutual funds, corporate deposits, and bonds and loans) to institutions, corporations, high-net-worth individuals and families. Our customer has a presence across India, with international offices in Dubai, Hong Kong and New York City.

Business Challenges

One of the underlying aspects of a successful IT ecosystem is its infrastructure, and the customer was dealing with an unstable IT infrastructure in which any major service interruption affected the whole system. Poor availability of IT infrastructure makes a company vulnerable to cyber threats, network failures and outages. Updating such a system may require calling on resources at all levels, which is not only time consuming but also expensive to scale. Our customer was therefore in urgent need of a tech refresh to maximize its business operations while decreasing operational cost.

Being a leading firm in financial services meant dealing with massive amounts of data, and the next challenge our customer faced was storing and accessing that data in the data center.

Desired Business Outcome

Our customer wanted to build a flexible organization that can respond quickly to changes in the marketplace and react successfully to sudden shifts in overall market conditions. They wanted an agile IT solution to meet increasing customer demands and improve overall team productivity.

Data centers are a critical component of a stable IT infrastructure; virtually all organizations rely on them to provide information without latency. Operating a data center is costly, so our customer wanted to consolidate its data center footprint in an effort to reduce operational costs. Achieving this benefits the organization in two ways: consolidation reduces exposure to potential cyberattacks by reducing the number of access points in the network, and grouping a large number of servers into a smaller structure lowers both operational cost and the overall IT footprint.

Clients value the traditional client-broker relationship, but advances in the brokerage industry have drawn them to shift from traditional trading to online trading. Our customer wanted to look beyond traditional stock broking services by introducing new offerings such as extended 24-hour assistance and internet- and mobile-based applications to reach more and more customers.

Solutions

Galaxy Professional Services handled the hardware installation and, in collaboration with VMware for implementation and data migration services, provided a custom-configured solution using 12 Dell PowerEdge R740 servers. This helped the customer maximize uptime and reduce IT effort, so that the focus shifted to bigger priorities rather than routine maintenance. The R740 leverages new security features built into PowerEdge to strengthen protection, so the data delivered to the customer is reliable and secure no matter where they are. Its main aim is to reduce IT complexity, eliminate inefficiencies and lower costs by making IT solutions work harder for the customer.

We also installed 8 Dell PowerEdge R440 Servers to provide high performance compute which is easy to scale and automate.

Container Adoption Trends: Why, How and Where

Benchmark your application strategy with data. Read this ASR survey of IT decision makers about adoption of containers and Kubernetes.

Application containerization—packaging software to create a lightweight, portable, consistent executable—delivers technical and business advantages over conventional delivery methods. Containerized apps are quickly deployable for easy scaling, run in diverse environments and offer security advantages thanks to their isolation from other software. In combination with orchestration software such as Kubernetes, containers can also be centrally dispatched, managed and scaled for IT agility.
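
As a concrete illustration of that pairing, the sketch below declares a containerized app once and asks Kubernetes to run and scale it. It assumes the official `kubernetes` Python client and a local kubeconfig pointing at a test cluster; the app name and image are placeholders.

```python
# Sketch: declare a containerized app and let Kubernetes schedule and scale it.
# Assumes `pip install kubernetes` and a kubeconfig for a test cluster.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig context

container = client.V1Container(
    name="demo-web",                      # placeholder name
    image="nginx:1.25",                   # any containerized application image
    ports=[client.V1ContainerPort(container_port=80)],
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="demo-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # containers are "quickly deployable for easy scaling"
        selector=client.V1LabelSelector(match_labels={"app": "demo-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-web"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Scaling up then becomes a one-line change to the replica count rather than a fresh installation on every server.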

In September 2021, Dell commissioned Aberdeen Strategy and Research (ASR) to survey hundreds of IT decision makers with experience in choosing or deploying containers. The goal was simple: to better understand how and why containers and Kubernetes are being deployed at mid-size as well as larger enterprises, assess container-related performance advantages and uncover challenges associated with Kubernetes and container environments. The survey found that, on average, over 50% of applications are containerized.

Among the use cases for container adoption highlighted in the results are the expected drivers of application development and testing. Other interesting drivers include server consolidation, multi-cloud capability and automating the pipelines from application code to production environments. Interestingly, the survey highlighted the fact that the deployment of third-party applications and services is cited as a driver more frequently than the in-house development of custom applications. Even for organizations that do little more than tie together existing applications with lightweight scripts or use off-the-shelf applications, containerization offers logistical benefits.

It should be no surprise that security, time-to-market, improved deployment capabilities and driving efficiencies are cited as key drivers by respondents to this survey. Also, some common inhibitors to adoption were cited including enabling technology that is too complex to justify the effort, uncertainty around security capabilities, lack of internal know-how and fear of spiraling costs.

Application deployment trends found by the survey show that while container adoption is widespread, virtual machines continue to lead as the deployment mechanism for applications. This points to the need for a pragmatic approach to enterprise architectures that assumes the co-existence of VMs and containers for the foreseeable future. Furthermore, organizations cited the strong need for support for both public cloud and private cloud deployment options with a hybrid approach being pursued by over two-thirds of surveyed organizations.

Original research like this is a great way to benchmark how your IT strategy aligns with industry trends. Please read the executive summary of the results and also reference the infographic summarizing how Dell Technologies and VMware solutions provide a pragmatic approach for container adoption.

Author: Bob Ganley, Dell Technologies Cloud Product Group

Source: https://www.dell.com/en-us/blog/container-adoption-trends-why-how-and-where/

FOR A FREE CONSULTATION, PLEASE CONTACT US.

Galaxy Recognized as Dream Company to Work For by HRD Congress

Galaxy Recognised By The World HRD Congress As One Of The Dream Companies To Work For Under The IT/ITES Category

We are a leading technology solutions provider that helps organisations digitally transform their business. With a pan-India presence, supported by 200+ certified, committed professionals, we design and implement cost-effective, agile and scalable IT infrastructure solutions that meet our customers' present as well as future needs. Recently, we were recognized by the World HRD Congress as one of the Dream Companies to Work For under the IT/ITES category and received this prestigious award at the 30th Edition of the World HRD Congress & Awards in Mumbai on 23rd March 2022.

The World HRD Congress recognizes organizations that have demonstrated excellence and innovation in the field of IT/ITES. Its goal is to provide a platform showcasing dream companies that individuals can work for across various industries. Nominations are evaluated by an eminent jury of senior professionals against pre-defined criteria and go through a rigorous six-step process, from receipt of the entries to the final rankings, which includes a presentation by the short-listed companies on innovative HR practices, company values, work culture, CSR and more. You can find more details about the rankings and awards at http://dreamcompaniestoworkfor.org .

We received the award for ensuring employee happiness and satisfaction along with job security and clear road maps and avenues for growth. We have always strived to provide an environment for innovation where everyone has the responsibility and ownership to continuously improve what they are doing.

While expressing pride and happiness over the recognition, Mr. Anoop Pai Dhungat, Managing Director, stated, "This is an important milestone for us, and we will continue to invest our management time and focus on creating a highly committed workforce and delivering great value to our customers. We strive to keep up the good work by our HR team and continue to improve our workplace culture for the future as we move towards being a great organization."

Looking ahead, in line with the company's growth story, we are targeting an overall headcount growth rate of 20 percent during the year. We also believe in selecting talent from campuses and grooming them in various areas of technology and operations; this has been one of the focus areas of our hiring in each of the past three years.

The Five R’s Of Application Modernization

Most organizations realize that application modernization is essential in order to thrive in the digital age, but the process of modernizing can be highly complex and difficult to execute. Factors such as rapidly growing application volume, diversity of app styles and architectures, and siloed infrastructure can all contribute to the challenging nature of modernization. To add to this complexity, there are multiple ways to go about modernizing each individual application. Depending on business and technical goals, you may opt to lift-and-shift some apps, while containerizing or even refactoring others. Each path results in different time commitments and application performance outcomes, and ultimately a different level of effort required to meet the organization's anticipated return on investment.

THE FIVE R’S

The Five R’s are a set of common modernization strategies that organizations can use when moving applications to modern infrastructure and cloud native application platforms. The first step to efficiently modernizing your application portfolio is to determine the best strategy for each app based on business needs and technical considerations (e.g., how much effort will be involved in modernizing the application and the target infrastructure platform for the app).

Refactor
Refactoring refers to making significant source code changes to the application (rewriting the application or service), typically using cloud native technologies such as microservices and application programming interfaces (APIs). While the process can be complex and laborious, this strategy actually provides the most benefit for high-value systems and applications that require frequent updates and innovation.

Replatform
Replatforming involves containerizing an application and moving it to a Kubernetes-based platform. There may be small code changes needed to take advantage of the new environment. This strategy is commonly implemented when moving applications running on virtual machines (VMs) to container-based apps running on a modern app platform or public cloud infrastructure.

Rehost
Rehosting refers to changing the infrastructure or operation of an application without changing the application itself. This is often done to gain the cost benefits of the cloud when the rate of change to an application is low and wouldn’t benefit from refactoring or replatforming.

Retain
Retaining involves optimizing and retaining an application as-is. This strategy might be used when there is data that can’t be moved, or a modernization that can be postponed.

Retire
Retiring is when a traditional application is no longer used, or is replaced with an off-the-shelf software-as-a-service (SaaS) offering.

THE RELATIONSHIP BETWEEN TIME AND VALUE IN YOUR APP MODERNIZATION STRATEGY

In most cases, the higher the business value of an application, the greater potential benefit there is to undergo more change. By refactoring primarily business-critical and high-value apps, you can maximize your team’s precious time while prioritizing the applications that have the most to gain from more flexible architectures and scalable infrastructure. Applications that remain unchanged for long periods of time and don’t hinder your company’s ability to innovate don’t need to be rewritten. When the goal is to increase IT efficiencies and decrease IT costs for apps requiring infrequent updates, you’ll be better off rehosting or replatforming these applications.

HOW TO ASSESS AND DISPOSITION YOUR PORTFOLIO

The main factors that play a critical role in a successful and actionable modernization strategy fall into three categories: technical, business, and organization/people. VMware helps organizations jumpstart app portfolio modernization by analyzing and prioritizing these considerations and more through service engagements like VMware App Navigator in our Rapid Portfolio Modernization program. By assessing and dispositioning your application portfolio, you can determine which of the Five R’s will be the best course of action for each of your apps.

For technical factors, consider variables such as application framework and runtime, architecture design, dependencies, and integrations. Tools such as Application Transformer for VMware Tanzu and our Cloud Suitability Analyzer can help streamline this discovery and analysis. For business factors, consider elements like business criticality, licensing costs, and time-to-market factors. For organizational and people factors, consider domain expert availability, organizational and team structure, and calendar dependencies.
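
As a purely illustrative sketch (not a VMware tool or the App Navigator methodology), the kind of logic behind such a disposition exercise can be as simple as weighing business value, rate of change, and effort to change for each app. The factor names, 1-5 scales, and thresholds below are hypothetical.

```python
# Illustrative only: toy disposition of an app portfolio against the Five R's.
# Factor names, 1-5 scales, and thresholds are hypothetical.
def disposition(business_value, effort_to_change, rate_of_change,
                still_needed=True, saas_replacement_exists=False):
    if not still_needed or saas_replacement_exists:
        return "Retire"        # no longer used, or replaced by an off-the-shelf SaaS offering
    if business_value >= 4 and rate_of_change >= 3:
        return "Refactor"      # high value, frequent change: worth rewriting cloud natively
    if effort_to_change <= 2 and rate_of_change >= 2:
        return "Replatform"    # modest changes to containerize onto a Kubernetes platform
    if rate_of_change <= 1 and effort_to_change >= 4:
        return "Retain"        # optimize and keep as-is (e.g., data that can't be moved)
    return "Rehost"            # lift-and-shift for the cost benefits of new infrastructure

portfolio = {
    "trading-core":   dict(business_value=5, effort_to_change=4, rate_of_change=4),
    "hr-portal":      dict(business_value=2, effort_to_change=1, rate_of_change=2),
    "legacy-reports": dict(business_value=2, effort_to_change=5, rate_of_change=1),
}
for app, factors in portfolio.items():
    print(app, "->", disposition(**factors))
```

A real assessment weighs many more factors than this, but the shape of the decision is the same: business value and rate of change pull an app toward refactoring, while low change rates and high effort pull it toward rehosting or retaining.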

Ultimately, there are lots of facets to consider when deciding the best course of action for each application in your portfolio. But, by leveraging this framework with VMware as your partner, you can standardize and simplify your strategy to efficiently assess and disposition your portfolio.

LANDING ZONES

Once you have determined which apps you want to refactor, replatform, and rehost, where do these apps go after they’re modernized? We call the new target infrastructure “landing zones,” which may include some combination of on-premises, public cloud(s), Kubernetes, VMs, platform as a service (PaaS), and bare metal. Because of the dynamic nature of applications and the complexities of enterprise IT budgets, choosing the right landing zones is rarely as simple as just identifying the least expensive option.

To determine the best landing zones for your apps, consider factors like data gravity, developer experience, potential cloud exit strategies, and implications to the mainframe.

HOW TO GET STARTED

We’ve established what the Five R’s are, the relationship between effort to change and expected value in app modernization, app disposition strategies, and how to decide on the right landing zones. But how do you get started on this app modernization path? Here’s a guideline:

Get Buy-in: make sure all the stakeholders for an application are bought into the modernization effort.

Set Expectations: provide as much visibility as possible into the time and effort that a modernization project will require. Avoid over-promising and under-delivering.

Restructure when Needed: prepare for your organizational structure to evolve as modernization efforts advance. Pay attention to how other companies have organized, but don’t just assume the same approach will work for you.

Prioritize Your Portfolio: analyze your applications and divide them under the Five R’s: refactor, replatform, rehost, retain, retire.

Look for Patterns in Your Portfolio: identify commonalities among your applications, looking for architectural and technical design similarities.

Choose the Right Starting Point: pick one or a few small(ish) projects that will help you start on the right foot in terms of building skill, momentum, or both. Or, focus on one or a few groups of similar applications, selecting a representative application in each group to start with.

Make Smart Technology Decisions: don’t choose a set of technologies simply because it’s what the “cool kids” are using. Make sure your choices are right for your organization.

Break Down Monoliths: plan carefully to decompose monolithic applications into more manageable pieces without worrying about satisfying any cloud native purity tests.

Pick Platforms Pragmatically: base cloud and platform choices on the needs and capabilities of your organization.

Interested in following this guideline? VMware’s Rapid Portfolio Modernization program brings automated tooling and proven practices to execute upon each of these steps in a seamless and effective way.

Ultimately, the best app modernization path is one that aligns with your business goals, can produce results quickly, and is agile enough to evolve along with demands. The Five R’s provide you with a framework to best disposition your apps in a way that reduces the overwhelming nature of app modernization.

Want to learn more about how to kickstart your application modernization efforts? Check out our eBook A Practical Approach to Application Modernization.

Author: VICTORIA WRIGHT

Source: https://tanzu.vmware.com/content/blog/the-five-rs-of-application-modernization

FOR A FREE CONSULTATION, PLEASE CONTACT US.

Three Ways To Optimize Your Edge Strategy

Enterprises can use these methods to move from proof-of-concept to a production edge platform that delivers a competitive advantage.

In enterprise IT circles, it’s hard to have a conversation these days without talking about edge computing. And there’s a good reason for this. “The edge” is where businesses conduct their most critical business. It is where retailers transact with their customers. It is where manufacturers produce their products. It is where healthcare organizations care for their patients. The edge is where the digital world interfaces with the physical world – where business critical data is generated, captured, and, increasingly, is being processed and acted upon.

This isn’t just an anecdotal view. It’s a view backed up by industry research. For example, 451 Research forecasts that by 2024, 53% of machine- and device-generated data will initially be stored and processed at edge locations. IDC estimates that, by 2024, edge spending will have grown at a rate seven times greater than the growth in spending on core data center infrastructure. In a word, this kind of growth is enormous.

WHY EDGE?

What’s behind the rush to the edge? The simplest answer to that question is that business and IT leaders are looking for every opportunity they can find to achieve a competitive advantage. Eliminating the distance between IT resources and the edge achieves several different things:

  • Reduced latency– Many business processes demand near real-time insight and control. While modern networking techniques have helped to reduce the latency introduced by network hops, crossing the network boundaries between edge endpoints and centralized data center environments does have some latency cost. You also can’t cheat the speed of light, and many applications cannot tolerate the latency introduced by the physical distance between edge endpoints and centralized IT.
  • Bandwidth conservation– Edge locations often have limited WAN bandwidth, or that bandwidth is expensive to acquire. Processing data locally can help manage the cost of an edge location while still extracting the maximum business value from the data.
  • Operational technology (OT) connectivity– Some industries have unique OT connectivity technologies that require specialized compute devices and networking in order to acquire data and pass control information. Manufacturing environments, for example, often leverage technologies such as MODBUS or PROFINET to connect their machinery and control systems to edge compute resources through gateway devices (a minimal polling sketch follows this list).
  • Business process availability– Business critical processes taking place in an edge location must continue uninterrupted – even in the face of a network outage. Edge computing is the only way to ensure a factory, warehouse, retail location, or hospital can operate continuously and safely even when it is disconnected from the WAN.
  • Data sovereignty– Some industries and localities restrict which data can be moved to a central location for processing. In these situations, edge computing is the only solution for processing and leveraging the data produced in the edge location.
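
As a small illustration of the OT-connectivity point above, the sketch below polls a pair of holding registers from a Modbus/TCP gateway using the open-source pymodbus library. The gateway address, register offsets, unit id and scaling are hypothetical, and the exact keyword names vary slightly between pymodbus releases.

```python
# Sketch: poll two holding registers from a Modbus/TCP gateway at an edge site.
# Assumes pymodbus 3.x (pip install pymodbus); addresses and scaling are made up.
from pymodbus.client import ModbusTcpClient

client = ModbusTcpClient("192.168.10.50", port=502)   # hypothetical gateway address
if client.connect():
    # Read 2 holding registers starting at offset 0 from unit/slave id 1.
    result = client.read_holding_registers(address=0, count=2, slave=1)
    if not result.isError():
        temperature_raw, pressure_raw = result.registers
        print(f"temperature={temperature_raw / 10.0:.1f} C, pressure={pressure_raw} kPa")
    client.close()
```

In production the same loop would typically feed a local data plane or message bus at the edge site, so the data can be processed locally as described above.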

As companies implement edge computing, they are moving IT resources into OT environments, which are quite different from the IT environments that have historically housed enterprise data. IT teams must adapt IT resources and processes for these new environments.

Let’s talk about the state of many edge implementations today and how to optimize your path forward.

MOVING BEYOND PROOFS OF CONCEPT (POCS)

The process of implementing and operating edge computing isn’t always straightforward. Among other things, edge initiatives often have unclear objectives, involve new technologies, and uncover conflicting processes between IT and OT. These challenges can lead to projects that fail to move from the proof-of-concept stage to a scalable production deployment.

To help organizations address these IT-OT challenges, the edge team at Dell Technologies has developed best practices focused on moving edge projects from POCs to successful production environments. These best practices are derived from our experience enabling IT transformation within data center environments, but they are adapted to the unique needs of the edge OT environments. To make this easy, we have distilled these best practices down to three straightforward recommendations for implementing edge use cases that can scale and grow with your business.

  1. Design for business outcomes.

Successful edge projects begin with a focus on the ultimate prize — the business outcomes. To that end, it's important to clearly articulate your targeted business objectives upfront, well before you start talking about technology. If you're in manufacturing, for example, you might ask whether you want to improve production yields or reduce costs by a certain amount by proactively preventing machine failure and the associated downtime.

Measuring results can be difficult when you are leveraging a shared infrastructure, especially when you are trying to look at the return on investment. If your project is going to require a big upfront investment with an initial limited return, you should document those business considerations and communicate them clearly. Having specific business goals will enable you to manage expectations, measure your results as you go, and make any necessary mid-course corrections.

  2. Consolidate and integrate.

Our second recommendation is to look for opportunities to consolidate your edge, with an eye toward eliminating stove-piped applications. Consolidating your applications onto a single infrastructure can help your organization realize significant savings on your edge computing initiatives. Think of your edge not as a collection of disconnected devices and applications, but as an overall system. Virtualization, containerized applications, and software-defined infrastructure will be key building blocks for a system that can enable consolidation.

Besides being more efficient, edge consolidation also gives you greater flexibility. You can more easily reallocate resources or shift workloads depending on where they are going to run the best and where they are going to achieve your business needs. Consolidating your edge also opens opportunities to share and integrate data across different data streams and applications. When you do this, you are moving toward the point of having a common data plane for your edge applications. This will enable new applications to easily take advantage of the existing edge data without having to build new data integration logic.

As you consolidate, you should ensure that your edge approach leverages open application programming interfaces, standards, and technologies that don’t lock you into a single ecosystem or cloud framework. An open environment gives you the flexibility to implement new use cases and new applications, and to integrate new ecosystems as your business demands change.

  3. Plan for growth and agility.

Throughout your project, all stakeholders must take the long view. Plan for your initial business outcomes, but also look ahead and plan for growth and future agility.

From a growth perspective, think about the new capabilities you might need, and not just the additional capacity you are going to need. Think about new use cases you might want to implement. For example, are you doing some simple process control and monitoring today that you may want to use deep learning for in the future? If so, make sure that your edge infrastructure can be expanded to include the networking capacity, storage, and accelerated compute necessary to be able to do model training at the edge.

You also must look at your edge IT processes. How are your processes going to scale over time? How are you going to become more efficient? And how will you manage your applications? On this front, it makes sense to look at the DevOps processes and tools that you have on the IT side and think about how those are going to translate to your edge applications. Can you leverage your existing DevOps processes and tools for your off-the-shelf and custom edge applications in your OT environment, or will you need to adapt and integrate them with the processes and tools that exist in your OT environment?

A FEW PARTING THOUGHTS

To wrap things up, I’d like to share a few higher-level points to consider as you plan your edge implementations.

Right out of the gate, remember that success at the edge depends heavily on having strong collaboration between your IT stakeholders and your OT stakeholders. Without that working relationship, your innovations will be stuck at the proof-of-concept stage, unable to scale to production, across processes, and across factories.

Second, make sure you leverage your key vendor relationships, and use all the capabilities they can bring to bear. For example, Dell Technologies can help your organization bring different stakeholders within the ecosystem together through the strong partnerships and the solutions that we provide. We can even customize our products for particular applications. Talk to us about our OEM capabilities if you have unique needs for large edge applications.

Finally, think strategically about the transformative power of edge, and how it can give you a clear competitive advantage in your industry. But always remember that you are not the only one thinking about edge. Your competitors are as well. So don’t wait to begin your journey.

Author: Philip Burt, Product Manager-edge strategy, Dell Technologies.

Source: https://www.dell.com/en-us/blog/three-ways-to-optimize-your-edge-strategy/

FOR A FREE CONSULTATION, PLEASE CONTACT US.

5 Enterprise Tech Predictions Following An Unpredictable Year

We all went into 2020 with a plan. Those plans were rendered irrelevant just a few months into the year. Organizations quickly rolled out contingency plans and put non-essential initiatives on hold. This may lead one to believe that 2020 was a wash for technology innovation. I would argue otherwise. In fact, organizations deployed inspired solutions to tackle considerable challenges.

Here are a few observations from 2020 and five enterprise tech predictions for 2021.

The Edge Is the New Frontier for Innovation

Amazing things are happening at the edge. We saw that on full display in 2020. Here are a few examples:

  • When the pandemic first hit, a lab testing company rolled out 400 mobile testing stations across the United States in a matter of weeks.
  • A retailer relocated their entire primary distribution center, which was in a state under stay-at-home orders, to fulfill an influx of e-commerce orders from a new location.

These organizations used existing edge investments to react and innovate with velocity. And in the year ahead, we will continue to see prioritized investment at the edge.

Network reliability and performance directly impact employee and customer experience. That alone led to expansive SD-WAN rollouts at the edge and in home offices. Simple SaaS-delivered solutions (inclusive of hardware) will further improve security and user experience wherever employees choose to work. And this will start a trend in which these solutions become the norm.

Additionally, I expect organizations to increasingly adopt secure access service edge (SASE) solutions. Legacy network and security architectures create unnecessary hairpinning and performance degradation. Instead, our future will lie in application and infrastructure services that are defined in software and deployed and managed as software updates. While upending legacy procurement processes along the way, organizations will dramatically improve performance and security.

We are also getting far more intelligent at the edge, with the ability to learn, react and optimize in real-time. Furthermore, we are seeing new opportunities for infrastructure consolidation at the edge, reducing the number of specialized appliances required to meet technology needs. This is an exciting development as it opens doors for cost-positive solutions where you improve automation, safety, and efficiencies, while simultaneously reducing costs.

Decentralization of Machine Learning

Staying at the edge for another moment, let’s talk about federated machine learning (FML). We are starting to see early uptake in this area among businesses. Across all industries, organizations are innovating to make better data-driven decisions, while leveraging highly distributed technology footprints.

With compute capacity practically everywhere, federated learning allows organizations to train ML models using local data sets. Open source projects, such as FATE and Kubeflow, are gaining traction. I expect the emergence of intuitive applications on these platforms to further accelerate adoption.
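
To make the idea concrete, here is a minimal federated-averaging sketch in plain Python/NumPy. It is purely illustrative; frameworks such as FATE or Kubeflow handle the orchestration, security, and model plumbing in practice. Each site computes an update on its own data, and only the model parameters, never the raw data, travel to the aggregator.

```python
# Sketch of federated averaging: each edge site trains on local data,
# and only model weights are shared and averaged centrally.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's training pass: simple linear-regression gradient steps on local data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three "sites", each holding private data that never leaves the site.
sites = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    sites.append((X, y))

global_w = np.zeros(2)
for _ in range(10):
    # Each site returns updated weights; the aggregator simply averages them.
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    global_w = np.mean(local_ws, axis=0)

print("learned weights:", global_w)  # approaches [2.0, -1.0] without pooling the data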

Early ML solutions disproportionately benefited a small percentage of enterprises. These organizations had mature data science practices already in place. ML adoption continues to pick up pace. And that acceleration is driven by turnkey solutions built for “everyone else.” These are enterprises that want to reap the rewards of ML without having to make large investments in data science teams—often a difficult challenge given the industry shortage of data scientists today.

Renewed Momentum for Workplace 2.0 Initiatives

The pandemic brought renewed momentum for many Workplace 2.0 initiatives. I’m especially interested in augmented reality (AR) and virtual reality (VR) use cases.

AR and VR are gaining traction, especially in use cases like employee training, AR-assisted navigation (such as on corporate campuses), and in online meetings. This year, I had the opportunity to participate in a VR meeting. The cognitive experience was quite fascinating. While on a Zoom meeting, it’s quite obvious that you’re on a video call. But after a few minutes in a VR meeting, you start to feel like you are actually in the room together.

There’s still work to be done to drive mainstream adoption. 2021 will see gains in the adoption of AR and VR, aided by advancements in enterprise-class technologies that address security, user experience, and device management of these solutions.

That said, the biggest gap for VR, in my opinion, is that there is not an equivalent to Microsoft PowerPoint for VR. In other words, in the future, I want to be able to quickly create 3D content that can be consumed in a VR paradigm. Today, there simply is not an easy productivity tool that would allow anyone to quickly create rich 3D content that takes full advantage of the 360-degree panorama afforded by VR. I expect this to be an area of focus for AR and VR technologists moving forward.

Continued Evolution of Intrinsic Security and Data Protection

Innovations in the security space brought intrinsic security from what some called a marketing buzzword into something real.

For instance, today one can leverage virtualization technologies to secure a workload at the moment it is powered on, prior to even an operating system being installed. That is intrinsic security by definition and represents a major step forward from the traditional security model.

In 2021, security will once again be amongst the top technology investments for the year, with both ransomware and security at the edge getting increased attention. Sophisticated ransomware attacks are not just targeting data, but also data and system backups. This creates the potential that even system restores are compromised.

We need to change how we protect systems and data. We need to fundamentally rethink what it means to back up and recover systems. Legacy solutions with static protection and recovery approaches will start facing the potential for disruption as the year progresses.

When we look at the edge, a growing number of technology decisions are being made by the lines of business—sometimes even at a local level—and not central IT. This has long created challenges as smart and connected devices are deployed at edge sites faster than traditional IT processes. While we should always strive toward deploying compliant solutions, we need to accept the fact that business velocity and agility requirements can be in conflict.

To that end, we must look at technologies that offer broader discovery of connected systems at the edge and provide adaptive security policy enforcement for those systems. Instead of fighting the battle for control, security leaders must accept there is some degree of chaos and innovate with the expectation of chaos as opposed to outright control.

Applying New Technologies to Old Challenges

In 2021, what’s old may be new again—at least in taking another look at how new technologies can help solve old challenges.

For example, in the area of sustainable computing, there is a lot of energy efficiency to be gained in the traditional data center. VMware currently has an xLabs project to help our customers optimize hot and cool aisle spaces in their data centers. Early studies revealed a promising amount of energy efficiency can be gained through platform-driven data center heat management.

Additionally, machine learning may soon help improve accessibility. Earlier this month, we announced a project spearheaded by VMware technologists to help developers conduct better automated accessibility testing with machine learning. This project will make it easier for organizations to meet accessibility standards, while reducing costs for the software they build.

2020 was a year of determined progress. Unforeseen challenges taught us to plan and architect for the expectation of change. And we must be resilient to adapt to new ways of living and working.

2021 ushers in hope as we navigate whatever our new normal will be. And I’m excited to see how that new normal will be shaped by advancements in technology.

Author: Chris Wolf, VP, Advanced Technology Group, VMware

Source: https://www.vmware.com/radius/5-enterprise-tech-predictions-2021/

FOR A FREE CONSULTATION, PLEASE CONTACT US.

Multi-Cloud: Strategy Or Inevitable Outcome? (Or Both?)

Multi-cloud is top of mind for many technology leaders. What are the benefits? The challenges? And ultimately, is it the right fit for the business and its teams? There's no consensus about multi-cloud—as evidenced in a recent Twitter thread I started. So, let's break down why. I'll share my view of multi-cloud and possible approaches to cloud strategy implementation without (yet) getting into how VMware speeds your cloud journey. Then, let's hear about yours. There's a lot to cover on strategy, so I'll start with definitions.

What is Multi-Cloud?

One of the biggest challenges that surfaced in the discussion of multi-cloud is that we all have slightly different definitions of multi-cloud.

First, as a starting point, a commonly agreed-upon definition of hybrid cloud:

Hybrid cloud: consistent infrastructure and operations between on-premises virtualization infrastructure/private clouds and public cloud.

Hybrid cloud is about connecting on-premises environments and public cloud.  The key distinguishing characteristic of a hybrid cloud is that the infrastructure (and thus operations) is consistent between on-prem and cloud. This means the same operational tools and skillsets can be used in both locations and that applications can easily be moved between locations as no modifications are needed.

Now to define multi-cloud:

Multi-cloud: running applications on more than one public cloud.

This definition could mean a single application/app that is stretched across clouds but more often means multiple apps on multiple clouds (but each app is contained entirely on a single cloud). It could mean that the underlying cloud is partially or completely abstracted away, or it could mean that the full set of cloud capabilities is available to the apps.  Perhaps confusingly, multi-cloud can include on-premises clouds too!  This is just a generalization of the “many apps in many locations” definition.

Multi-Cloud Approaches

Having apps running on multiple clouds presents challenges: How do you manage all these apps, given the vastly different tooling and operational specifics across clouds? How do you select which cloud to use for which apps? How do you move apps between clouds? How much of this do you want to expose to your developers?

There are a variety of approaches to multi-cloud that offer different trade-offs to the above problems.  I see four primary approaches businesses are taking to multi-cloud:

  • No Consistency: This is the default when a business goes multi-cloud. Each cloud has its own infrastructure, app services (e.g., database, messaging, and AI/ML services), and operational tools.  There is little to no consistency between them and the business does nothing to try and drive consistency.  Developers must build apps specifically for the cloud they’re using.  Businesses will likely need separate operations teams and tooling for each cloud.  But apps can take full advantage of all the cloud’s capabilities.
  • Consistent Operations: The business aligns on consistent operations and tooling (e.g., governance, automation, deployment and lifecycle management, monitoring, backup) across all clouds, each with its unique infrastructure and app services.  Developers still build apps to the specifics of the cloud and moving apps between clouds is still a large amount of work, but the business can standardize on an operational model and tooling across clouds.  This can reduce the cost of supporting multiple clouds through consolidated operations teams with less tooling and increase app availability through common, well-tested, and mature operational practices.
  • Consistent Infrastructure: The business leverages a consistent infrastructure abstraction layer on top of the cloud. Kubernetes is a common choice here, where businesses direct their developers to use clouds' Kubernetes services. VMware Cloud is another option, as it's the consistent VMware SDDC across all clouds. Common infrastructure standardizes many parts of the app, allowing greater portability across clouds while still leveraging the common operational model (which is now more powerful as the infrastructure has been made consistent!). Developers can still take advantage of each cloud's app services though, which is where some cloud stickiness can creep in. (A small sketch of this approach follows this list.)
  • Consistent Applications: The business directs its developers to use consistent infrastructure abstraction and non-cloud-based app services for their apps.  This builds on Consistent Infrastructure by also specifying that any app services used must not come directly from the cloud provider.  Instead, app services can be delivered by ISVs (e.g., MongoDB, Confluent Kafka) as Kubernetes operators or as a SaaS offering (e.g., MongoDB Atlas, Confluent Cloud).  Apps are now easily portable across clouds and cloud selection is totally predicated on cost, security, compliance, performance, and other non-functional considerations.
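
To illustrate the Consistent Infrastructure approach, here is a small sketch that applies the same Deployment manifest to Kubernetes clusters running in two different clouds, with only the kubeconfig context changing. It assumes the official `kubernetes` Python client and PyYAML; the context names and manifest path are placeholders.

```python
# Sketch: one app definition, two clouds; only the kubeconfig context differs.
# Context names ("eks-prod", "gke-prod") and the manifest path are placeholders.
import yaml
from kubernetes import client, config

with open("deployment.yaml") as f:          # the same manifest for every cloud
    manifest = yaml.safe_load(f)

for context in ["eks-prod", "gke-prod"]:    # hypothetical cluster contexts
    config.load_kube_config(context=context)
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=manifest)
    print(f"deployed to {context}")
```

The same pattern is what makes the operational model portable: the tooling and the app definition stay constant, and only the target cluster changes.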

It’s important to note that no approach is generally “better” than any of the others.  Each approach comes with tradeoffs and it’s up to the business to decide which is best for it based on its unique needs and requirements.  And in some cases, businesses may leverage more than one approach, with different apps or development teams taking different approaches.

Strategy or Inevitable Outcome?

The natural next question is whether you should go multi-cloud.  In an ideal world, running all your apps on a single cloud is likely best for most businesses.  You can standardize everything you’re doing to that one cloud, simplifying app implementation and operations.  Apps can take advantage of all the specific innovative features of that cloud.  You can negotiate higher discounts with the cloud provider because you have higher usage than you would if you spread your workloads over many different clouds.

The problem, though, is that it’s very hard to run all apps in only one cloud.  Acquisitions may be using a different cloud.  After the acquisition closes, the question is then whether to move all the apps onto a single cloud (likely using precious time and resources that could be invested in integrating that acquisition) or living with multi-cloud.  Shadow IT is still happening in many businesses, where developers or lines of business make independent decisions to use another cloud technology, meaning you’ll likely end up in a multi-cloud situation even if you try to avoid it.  Finally, even if you can deal with those problems, what if your preferred cloud isn’t innovating in a new area as fast as another cloud?  It may be necessary for the business to start using that other cloud, putting you into a multi-cloud world.

The general takeaway is that, try as hard as you might, staying on a single cloud likely won't last for very long. Something will happen to make you go multi-cloud. Really, it's just a question of whether it's due to a proactive strategy or an inevitable outcome because of one or more of the above reasons. In either case, having a multi-cloud plan is a must!

Author: Kit Colbert, VP & CTO, Cloud Platform BU at VMware

Source: https://octo.vmware.com/multi-cloud-strategy-or-inevitable-outcome-or-both

FOR A FREE CONSULTATION, PLEASE CONTACT US.