Experimenting with business models

Working in a niche market meant limited customers, extensive groundwork, and cutthroat competition. With WebNMS, our focus was on high-touch engagement. First, we'd generate leads by understanding a prospect's organization: who the decision-makers were, who would sign the purchase orders, and so on. Then we'd find ways to reach those leads through calls, emails, brochures, or even attending events where we might bump into them, a strategy followed by most businesses in the market at the time. Ultimately, we understood that this was not a sustainable way for us to operate.

With ManageEngine, we shifted our focus to leveraging the internet for new models of product development, marketing, sales, and customer support. We engaged our audience digitally through online ads, social channels, and online stores, and offered support through online forums.

We were also focused on emerging and disruptive technologies, so we were quite early in terms of having a browser-based user interface (BUI), which radically changed the way users interacted with our applications.

Enterprise software at the time was generally complex and bloated. Commissioning it required buying not only multiple hardware components but also many software components, then stitching them all together to make things work. ManageEngine, on the other hand, made the software part easy by providing a single, self-sufficient package that could run on small-footprint hardware like a desktop or laptop. This model, again, was a bit disruptive for those times.

Investing in R&D

Zoho's mission is to invest in long-range R&D and bring a deep-rooted R&D culture to Indian companies. Our belief is that customers benefit when we spend the majority of our budget on R&D as opposed to sales and marketing. Initially, about 80% of our spending went to R&D, and even today we invest about 40% back into R&D. This includes building libraries, frameworks, platforms, and self-sufficient software; developing the know-how to build and run our own data centers; and investing in labs that bring futuristic technologies to bear on customer problems.

Zoho Corp's AI team, ZLabs, spends 60% of its time identifying new trends and 40% productionizing its findings. The team is split into:

  • A research team to review academic literature and evaluate whether new techniques would be of value to our solutions.
  • An implementation team to determine how those techniques can be integrated into our enterprise products.
  • A forward deployment team that works with product teams, understands customer requirements, and ships AI products through those teams.
  • An operations group to maintain AI models and scale up in a cost-effective manner when required.

[Image: Breadth of AI work at Zoho Corp]

Putting failure to work

Creating a sustainable business comes with struggles and failures. While the world sees and celebrates your victories, it's important to acknowledge the challenges you faced along the way. In the early stages, our teams came up against obstacles and failures, some more growth-defining than others. Rather than viewing them as setbacks, we saw them as opportunities to course-correct and do better.

Between 1997 and 2000, AdventNet worked with researchers and businesses on an innovative solution to deliver telecommunication capabilities across India. People were looking for alternative means of telecommunication, because deploying landlines to such a massive population was a difficult task. Without going into the details of the technology, the solution enabled a wireless connection from individual subscribers to the nearest telephone exchange, eliminating a large chunk of underground telephone lines.

Together, we made significant progress and even ran pilot projects in small regions. Then something unexpected occurred: cellular mobile phones took the world by storm. This economical, convenient, and completely wireless technology made the entire project redundant. The teams who worked on the project embraced it as a lesson in how quickly technology evolves and moved on to the next ManageEngine product.

The learning curve didn't end there. Even today, we're learning from our mistakes and making changes to our processes. Implementing AI taught us a valuable lesson. Our first AI functionality was implemented in our network monitoring solution: it provided suggestions to users about upcoming events. The feature shipped after a year and a half of research. After it went live, the research team waited eagerly for feedback from customers, but heard nothing. A week went by, then two, and still no response. Finally, a customer reached out: "How do I turn this off?" The customer didn't want the AI recommendations.

In hindsight, they were right to feel that way. We were trying to retrofit AI into existing ManageEngine products that had been around for a long time; some customers had been using them for almost a decade. Out of the blue, we launched an AI feature that was trained on just the last seven days of customer data and displayed suggestions. That didn't make sense.

We had to take a step back and reassess our approach to AI. We decided to introduce explainable AI. Instead of just predicting events, the model explained why an incident was expected. For instance, we have an outage prediction module in the network monitoring tool. It would say, "I'm expecting an outage in an hour because the server's CPU is facing an unusual spike when compared against most Fridays." Once we added these finer details, customers were able to appreciate the feature and integrate it with their IT processes.
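To make the idea concrete, here is a minimal sketch of how an explanation like that could be produced, assuming a simple day-of-week baseline: the current CPU reading is compared against readings from the same weekday in previous weeks, and the prediction carries its reasoning with it. This is an illustrative assumption, not ManageEngine's actual model; the function name, sample data, and threshold below are hypothetical.

```python
from statistics import mean, stdev

def explain_outage_risk(current_cpu, historical_cpu, weekday, z_threshold=3.0):
    """Compare the current CPU reading against readings from the same
    weekday in past weeks and return a human-readable explanation,
    or None if nothing looks unusual. (Hypothetical sketch.)

    current_cpu    -- latest CPU utilization (percent)
    historical_cpu -- CPU readings from the same weekday/hour in past weeks
    weekday        -- e.g. "Friday", used only in the explanation text
    z_threshold    -- how many standard deviations count as "unusual"
    """
    baseline = mean(historical_cpu)
    spread = stdev(historical_cpu)
    z_score = (current_cpu - baseline) / spread if spread else 0.0

    if z_score > z_threshold:
        return (
            f"Expecting an outage soon: CPU is at {current_cpu:.0f}%, "
            f"an unusual spike compared with most {weekday}s "
            f"(typical level {baseline:.0f}% ± {spread:.0f}%)."
        )
    return None  # within the normal range, no alert


# Example: Fridays usually hover around 35% CPU, but today it's at 92%.
print(explain_outage_risk(92, [34, 36, 33, 38, 35, 37], "Friday"))
```

The useful part is the returned sentence: surfacing the baseline the prediction was judged against is what turned an opaque suggestion into something customers could act on.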
