

How one major bank rebuilt its cloud



Cloud Computing News

The telecoms and banking sectors have more in common than you might think. While many analysts have posited a technological future in which the two converge, there is another similarity: both have major core strengths and significant user bases, yet both still run on a lot of legacy IT.

Himanshu Jha notes the commonalities, as well as the differences, and he is in a good position to do so. Having spent a decade in the former, at Verizon and BT – ‘essentially building a new stack’, as he puts it – he has spent the last decade in the latter. Following a stint at Barclays where, among other things, he helped co-author the bank’s data strategy to underpin the wider business strategy as part of a newly formed data team, Jha most recently served as cloud CTIO (chief technology and information officer) at TSB. A key attraction was that, whereas his previous remits involved consuming cloud, this was a chance to build it.

So how does Jha assess the two sectors in which he is so well-versed? “Telecom is actually more complex than banking,” he explains. “Why? Because they have a big piece of the network and the provisioning. Their big part of business is the network and how well it functions, and that dictates the customer experience really. Pretty much everything else, there is a lot of commonality – a lot of legacy, a lot of mainframe, a lot of need to move from a batch mode to real-time.

“[That said] the sensitivity of what a bank does is extremely high, and in that case, when you have legacy [tech] for such a thing, it perhaps hurts you more,” adds Jha. “So the need to innovate in banks is much more than in telco.

“I would like to think that banks kind of spearhead the exploitation of technology, [with] more stress to innovate.”

For the role of cloud CTIO, a balance between leadership, strategic, and technological aspects needed to be struck. Yet much as at Barclays, the importance of marrying the cloud strategy with the business strategy – improving service, delivering hyperpersonalisation, using analytics on cloud, alongside cost and efficiency – cannot be overstated.

Jha notes the interconnections between all three disciplines. “There’s no point of leadership if you cannot develop a strategy and execute on it,” he explains. “And the strategy must be contextual. So at the same time we have to do our cloud strategy, it was part of the overall technology strategy to enable the business strategy – and there is no point investing any dollars if something doesn’t enable the business strategy.

“If we look at it a little bit more closely, data has been there, and cloud has now come through, and AI has come through still. If you look at these three things together, they are very synergistic,” says Jha. “What I mean by that is you can’t really explore data at scale if you don’t have the cloud offering and, equally, data is key for AI. Therefore, if you have to take advantage of the three waves, if you will, then you’ve got to have a solid cloud offering.”

What did this transformation look like at TSB? In broad terms, as Jha outlines, it was two-fold: building new, and maintaining existing, enterprise cloud platforms and services; and modernising digital, data, and business applications and infrastructure to be cloud native, variously through rearchitecting and replatforming.

A key step was to understand which blocks should be rebuilt in AWS versus Azure. The solution was that, with a couple of exceptions, the customer-facing applications would be on AWS while the colleague-facing applications were on Azure.
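The placement rule described here can be sketched as a simple lookup. This is an illustrative toy, not TSB's actual tooling, and it deliberately ignores the "couple of exceptions" noted above; the mapping keys are assumptions for demonstration.

```python
# Illustrative sketch of the workload-placement rule described above.
# The mapping is an assumption for demonstration, not TSB's real policy.

PLACEMENT = {
    "customer-facing": "AWS",
    "colleague-facing": "Azure",
}

def target_cloud(app_audience: str) -> str:
    """Return the default landing zone for an application category."""
    return PLACEMENT[app_audience]

print(target_cloud("customer-facing"))   # AWS
print(target_cloud("colleague-facing"))  # Azure
```

In practice such a rule would live in an architecture decision record rather than code, but the point stands: a crisp default per workload category, with exceptions handled case by case.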

The colleague-facing estate included a virtual desktop – Jha cites Azure’s strength in this area – while an example of replatforming was taking a set of on-premises microservices and redeploying them on IBM Cloud. “In essence, the actual code of these microservices didn’t change,” says Jha. “It was redeployment on a different cloud platform, thereby allowing better runtime and better cost management.” Azure Synapse Analytics was also used for the user-centric analytics piece.

“It’s pretty much fair to say [a lot of the decision is from] the goodness or badness of AWS or Azure, but you’ve got to understand the technology well enough to be able to make use of it for your solution, your context,” Jha explains.

But what about deciding between rearchitecting and replatforming? “If you have time, and if your existing applications on-prem really need rearchitecting, then go for it,” says Jha. “But I think at least to take the benefit of the OPEX value from the cloud platform, and less need to patch and upgrade, you could take the benefit straight away by replatforming.”
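Jha's rule of thumb can be captured in a few lines. The function below is a hypothetical sketch: the inputs and decision order are my reading of his quote, not a formal framework.

```python
# Hypothetical sketch of the rearchitect-vs-replatform trade-off quoted above.
# Inputs and decision order are illustrative assumptions.

def migration_approach(has_time_to_rebuild: bool,
                       app_needs_rearchitecting: bool,
                       wants_quick_opex_benefit: bool) -> str:
    """Choose a migration style for one on-prem application."""
    if has_time_to_rebuild and app_needs_rearchitecting:
        # "If you have time, and if your existing applications really
        # need rearchitecting, then go for it."
        return "rearchitect"
    if wants_quick_opex_benefit:
        # Replatforming gives the OPEX benefit and lighter patching
        # "straight away", with the code largely unchanged.
        return "replatform"
    return "stay on-prem for now"

print(migration_approach(True, True, False))   # rearchitect
print(migration_approach(False, False, True))  # replatform
```

Real decisions weigh many more factors (dependencies, licensing, data gravity), but the ordering here mirrors the quote: rearchitect when time and need align, otherwise replatform for the quick win.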

It can understandably be something of a minefield – and there are trapdoors which organisations can easily fall through. Jha cites education and cost as the two primary issues.

“I don’t think education is consistent,” he explains. “I’m not just talking tech – I mean often there is inconsistency in tech itself – but digital transformation cannot be achieved if your business owners are not talking the same language, and not working the same pace.” It is not so much the technical knowledge, but broader brush strokes; ‘that consistent understanding of where we are driving, how we are getting there, which technologies are in use, how it will really affect the outcome’ as Jha puts it.

Cost naturally dovetails with this. The C-suite, looking at the figures, will want the project completed as efficiently as possible. Yet this can lead to unforeseen costs. “Let’s say you’re on cloud successfully,” says Jha. “You’ve got to build in for an increase in investment of having larger capacity in the largest skill set. The way you deploy and develop on cloud is a very different skill set than you may have at present.

“Many times these programmes are pulled because expectations are a mismatch,” Jha adds. “There is a benefit of a cloud infrastructure when it is in a steady state, when you move all your workloads as much as you can. Then the denominator increases, so to say, and you get the benefit. But not when you have some Mickey Mouse edge use cases running. That’s when you don’t see the value of cloud.

“To get the full scale of cloud, it takes time,” says Jha. “You’ve got to build in not just the front cost, but the time to do a run. You will be on on-prem infrastructure for a period of time until the applications on cloud become stable.

“These two costs – I’ve seen it missing. People forget to budget it in.”
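The "denominator" point and the overlooked overlap cost can both be made concrete with a toy model: a largely fixed platform cost divided by the workloads actually running on it, plus the on-prem spend that continues until the cloud estate is stable. All figures below are invented for illustration.

```python
# Toy model of the two costs discussed above. All numbers are invented.

def cost_per_workload(platform_cost: float, workloads_migrated: int) -> float:
    """Fixed platform cost spread over the workloads on it (the 'denominator')."""
    return platform_cost / workloads_migrated

def dual_running_cost(onprem_monthly: float, months_until_stable: int) -> float:
    """On-prem spend that continues until the cloud applications are stable."""
    return onprem_monthly * months_until_stable

# A few edge cases on cloud vs. most of the estate migrated:
print(cost_per_workload(1_200_000, 5))    # 240000.0 per workload
print(cost_per_workload(1_200_000, 300))  # 4000.0 per workload
# The overlap cost that is often left out of the budget:
print(dual_running_cost(80_000, 9))       # 720000.0
```

The same platform spend looks sixty times cheaper per workload once the estate has largely moved, which is exactly why a handful of edge cases never shows the value of cloud.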

Jha is speaking at the Cloud Transformation Conference on February 15 where he will outline how to master cloud migration strategies for business growth and agility. The talk will draw on Jha’s experience, so expect similar life lessons to the above to feature. Yet if there is a conclusion, it is a muted – but important – one.

“I want people to learn – because that is how I have felt – that I hope they can be a little more real,” says Jha. “[They] come down from all of the reports and all the promises, and hype, and possibilities, and double down the thinking, or the education or collaboration, or the much more boring-sounding work, to understand that the vision will never be realised until you get down to the nuts and bolts.

“People get excited, but the excitement will never come until they double down and find out the nitty gritty on how to make it really work.”

Photo by Nicholas Cappello on Unsplash




Why Malia Obama Received Major Criticism Over A Secret Facebook Page Dissing Trump



Given the divisive nature of both the Obama and Trump administrations, it’s unsurprising that reactions to Malia Obama’s alleged secret Facebook account would be emotional. Many online users were quick to jump to former President Donald Trump’s defense, with one user writing: “Dear Malia: Do you really think that anyone cares whether you and/or your family likes your father’s successor? We’re all trying to forget you and your family.”

Others pointed out what they saw as a double standard: condemning Trump for hateful rhetoric while praising Malia for speaking out against her father’s successor in terms they considered equally hateful. Some users seemed bent on criticizing Malia simply because they don’t like her or her father, proving that the eldest Obama daughter couldn’t win either way when it came to the public’s perception of her or her online presence.

The secret Facebook situation is not all that dissimilar to critics who went after Malia for her professional name at the 2024 Sundance Film Festival. In this instance, people ironically accused Malia of using her family’s name to get into the competitive festival while also condemning her for opting not to use her surname, going by Malia Ann instead.



Best Practices for Data Center Decommissioning and IT Asset Disposition




Data center decommissioning is a complicated process that requires careful planning and experienced professionals.

If you’re considering shutting down or moving your data center, here are some best practices to keep in mind:

Decommissioning a Data Center is More than Just Taking Down Physical Equipment


Decommissioning involves more than physically removing equipment: data center assets, including servers and other IT hardware, can contain sensitive information and must be disposed of properly. The process also requires a team with the right skills and experience to ensure that all data has been wiped from storage media before disposal.
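One way to enforce the wipe-before-disposal rule is to gate disposition on a verified wipe record per storage asset. The sketch below is hypothetical; the field names, serial formats, and two-stage wipe-then-verify workflow are assumptions for illustration, not a standard ITAD schema.

```python
# Hypothetical sketch: gate asset disposal on a verified data-wipe record.
# Field names and serials are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class StorageAsset:
    serial: str
    wiped: bool          # a wipe pass was run on the media
    wipe_verified: bool  # an independent check confirmed the wipe

def cleared_for_disposal(asset: StorageAsset) -> bool:
    """An asset may leave the building only after a verified wipe."""
    return asset.wiped and asset.wipe_verified

assets = [
    StorageAsset("SRV-001-D1", wiped=True, wipe_verified=True),
    StorageAsset("SRV-002-D3", wiped=True, wipe_verified=False),
]
blocked = [a.serial for a in assets if not cleared_for_disposal(a)]
print(blocked)  # ['SRV-002-D3']
```

The separate verification flag matters: a wipe job that ran is not the same as a wipe that was confirmed, and only the latter should release media for disposition.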

Data Centers Can be Decommissioned in Phases, Which Allows For More Flexibility

When you begin your data center decommissioning process, it’s important to understand that it’s not an event. Instead, it’s a process that takes place over time and in phases. This flexibility allows you to adapt as circumstances change and make adjustments based on your unique situation. For example:

  • You may start by shutting down parts of the facility (or all of it) while keeping others running until they are no longer needed or cost-effective to keep running.

  • When you’re ready for full shutdown, there could be some equipment still in use at other locations within the company (such as remote offices). These can be moved back into storage until needed again.

Data Center Decommissioning is Subject to Compliance Guidelines

Compliance guidelines may change over time, but they exist to ensure that your organization follows industry standards and best practices. Key areas to review include:

  • Local, state and federal regulations: You should check local ordinances regarding the disposal of any hazardous materials that were used in your data center (such as lead-based paint), as well as any other applicable laws related to environmental impact or safety issues. If you’re unsure about how these might affect your plans for a decommissioned facility, consult an attorney who specializes in this area of law before proceeding with any activities related to IT asset disposition or building demolition.

  • Industry standards: Many industry associations are dedicated to helping businesses stay compliant with legal requirements when moving forward with projects such as data center decommissioning.

  • Internal policies & procedures: Make sure everyone on staff understands that compliance matters not just from a regulatory standpoint but also from an ethical one; nobody wants their name associated with anything inappropriate.

Companies Should Consider Safety and Security During the Decommissioning Process

Data center decommissioning is a complex process involving several steps, and companies need a plan in place to mitigate the risks associated with each one. The first step is identifying all assets and determining which will be reused or repurposed. At this point, you should also estimate how long each asset will take to repurpose or recycle so you can project the cost of this part of the project (an estimate based on previous experience is usually sufficient).

The second step involves removing any hazardous materials from electronic equipment before it’s sent off-site for recycling; this includes chemicals used in manufacturing, such as solder pastes and adhesives on circuit boards. Once those materials have been removed, devices can safely move on to mechanical processing, such as grinding away the plastic housings until only the bare frames remain for recycling.

With Proper Planning and an Effective Team, You’ll Help Protect Your Company’s Future

Data center decommissioning is a complex process that should be handled by a team of experts with extensive experience in the field. With proper planning, you can ensure a smooth transition from your current data center environment to the next one.

The first step toward a successful data center decommissioning project is to create a plan for removing hardware and software assets from the building, and to document how those assets were originally installed in the facility. This allows you, or whoever inherits the assets later, to easily find out where they need to go when it’s time for them to be moved again or disposed of.

Use Professional Data Center Decommissioning Companies

To get the most out of your data center decommissioning project, it’s important to use a professional decommissioning company. A professional firm has experience with IT asset disposition and can help you avoid mistakes in the process. It will also have the tools and expertise needed to perform all aspects of your project efficiently, from pre-planning through finalizing documentation.

Proper Planning Will Help Minimize the Risks of Data Center Decommissioning


Proper planning is the key to success when it comes to the data center decommissioning process. It’s important that you don’t wait until the last minute and rush through this process, as it can lead to mistakes and wasted time. Proper planning will help minimize any risks associated with shutting down or moving a data center, keeping your company safe from harm and ensuring that all necessary steps are taken before shutdown takes place.

To Sum Up

The key to a successful ITAD program is planning ahead. The best way to avoid unexpected costs and delays is to plan your ITAD project carefully before you start. The best practices described in this article will help you understand what it takes to decommission an entire data center or other large facility, as well as how to dispose of its assets in an environmentally responsible manner.



Massive Volatility Reported – Google Search Ranking Algorithm Update




I am seeing some massive volatility being reported today after seeing a spike in chatter within the SEO community on Friday. I have not seen the third-party Google tracking tools show this much volatility in a long time. I will say the tracking tools are way more heated than the chatter I am seeing, so something might be off here.

Again, I saw some initial chatter from within the SEO forums and on this site starting on Friday. I decided not to cover it then because the chatter was not at levels that would warrant a post. Plus, while some of the tools had started to show a lift in volatility, most had not yet.

Well, that changed today: the tools are all superheated.

To be clear, Google has not confirmed that any update is officially going on.

Google Tracking Tools:

Let’s start with what the tools are showing. [Charts from the third-party tracking tools, including Advanced Web Rankings and Cognitive SEO, are not reproduced here.]
So most of these tools are incredibly heated, signaling that they are showing massive changes in the search result positions in the past couple of days.

SEO Chatter

Here is some of the chatter from various comments on this site and on WebmasterWorld since Friday:

Speaking of, is anyone seeing some major shuffling going on in the SERPs today? It’s a Friday so of course Google is playing around again.

Something is going on.

Pages are still randomly dropping out of the index for 8-36h at a time. Extremely annoying.

In SerpRobot I’m seeing a steady increase in positions in February, for UK desktop and mobile, reaching almost the ranks from the end of Sep 2023. Ahrefs shows a slight increase in overall keywords and ranks.

In the real world, nothing seems to happen.

yep, traffic has nearly come to a stop. But exactly the same situation happened to us last Friday as well.

USA traffic continues to be whacked…starting -70% today.

In my case, US traffic is almost zero (15 % from 80%) and the rest is kind of the same I guess. Traffic has dropped from 4K a day to barely scrapping 1K now. But a lot is just bots since payment-wise, the real traffic seems to be about 400-500. And … that’s how a 90% reduction looks like.

Something is happening now. Google algo is going crazy again. Is anyone else noticing?

Since every Saturday at 12 noon the Google traffic completely disappears until Sunday, everything looks normal to me.

This update looks like a weird one and no, Google has not confirmed any update is going on.

What are you all noticing?

Forum discussion at WebmasterWorld.
