In 2007 Marc Andreessen, co-founder of Netscape as well as the VC firm Andreessen Horowitz, published a blog article titled “The only thing that matters”. You can still read it today. It is considered to be the driver that pushed the concept of Product-Market Fit into the spotlight for tech startups.
The idea of Product-Market Fit was originally coined by Andy Rachleff, a VC and startup CEO, whom Andreessen quotes in his article:
Rachleff’s Corollary of Startup Success:
The only thing that matters is getting to product/market fit.
Rachleff started out in VC when tech startups were hardware focused – making a disk that was 10x faster or a router with 1/10 the latency. The challenge for hardware startups was always whether they could actually build the hardware that delivered the numbers they were promising. If they did, customers would throw money at them. These hardware startups had a high technical risk, but a very low market risk. A 10x improvement in anything would sell itself.
With the rise of the internet, software companies began to dominate the startup scene, but they had the opposite problems of hardware startups – very low technical risk and very high market risk. A software startup, and their investors, could have high confidence that they would be able to deliver the product – it was just coding, not wrestling with physics or manufacturing – but no way to be sure if anyone would buy it.
This had an interesting effect on startup investment, splitting the market into seed investors who were willing to shoulder the risk to help startups launch and, once their market was proven, traditional venture capital who would pay a premium to buy-in at later rounds.
So, product/market fit is important, it’s valuable, and it’s essential if a startup wants to grow into a unicorn.
Product/Market Fit sounds a lot like making sure a product is right for a market. That it “fits”. And it seems obvious. But this puts too much emphasis on the market and misses that Product-Market Fit is a two-piece puzzle for the startup. It needs both pieces.
The market needs to also fit the product. It needs to want the product. It needs to want the product badly enough to pay a good price for the product. And it needs to be big enough to buy lots of the product. You can’t build a business on a great product that everyone in the market wants and loves if the market has only a handful of customers.
Perhaps it should really be Product & Market Fit.
To explore Product-Market Fit we’re going to look at the example of Vanta, a certification compliance tool provider, and their journey to product-market fit and how, in creating their product, they created a market everyone else had overlooked or underestimated.
Vanta helps businesses achieve and maintain certification for a variety of standards. Initially they focused on SOC 2, a US-centric standard but often held by non-US companies who want to do business with US organisations. They have since expanded into ISO27001, HIPAA, GDPR and other certifications.
They use a subscription-based model, and given the importance of certification and the huge amount of time it takes to obtain, monitor and maintain it, they can charge 4-, 5-, and even 6-figure annual subscription fees depending on the size of the client and the certifications they require. And the clients are happy to pay them because they understand how much money Vanta is saving them.
So Vanta is in a nice spot (ignoring competition for the moment). But it took a while to get there. The founder and CEO, Christina Cacioppo, persisted through four major stages of product experience over the five years it took her to go from building her first products to finding and developing the idea that would become Vanta.
Cacioppo got her start working as an analyst for Union Square Ventures in New York, where she spent her time evaluating companies seeking investment and meeting with their founders.
Regular meetings with founders through her role gave her the confidence to start her own company:
“I got to the point where I said, ‘I do want to go start a company.’ But I wanted it to be a software company, and I didn’t know how to code. And I knew a lot of non-programmers started companies, but I didn’t want to go that route. So I resigned, took my bonus and taught myself to code and build products,” she says.
The products she built were not successful. They included a book tracking website, a video messaging app for Android and a startup job board. While none of her ideas grew into sustainable businesses, they did teach her that she needed product development experience. So for the next stage in the Vanta product path she joined Dropbox in 2014 and worked for two years as a product manager on their Paper product.
It was here that she got her first exposure to security and compliance. As a product manager on Paper, she wanted Dropbox to push it to new accounts when they signed up for Dropbox’s file sharing service. Dropbox’s legal department shut that down, telling her that Dropbox was SOC 2 compliant and Paper wasn’t, bundling Paper would invalidate contracts requiring compliance, and that it would take 18 months to make Paper SOC 2 compliant. For Cacioppo that was the end of bundling Paper with Dropbox.
In 2016, after two years at Dropbox, Cacioppo left to try again at starting her own company. Following trends at the time – speech was big, Amazon Alexa was exciting – she pursued several product ideas.
First was an AWS drop shipping solution for e-commerce merchants. Next was an AI tool for transcribing meetings. Another was a microphone that transcribed notes into Slack. Part of the problem was that speech-to-text AI wasn’t really there yet; the other part was that these products were generic “business tools” that no-one really wanted.
She then took a step back and tried to find a use case for the speech tech they had spent so much time with. She found that use case in biologists doing lab work.
“They’re doing things with their hands, they have gloves, they’re working with chemicals. Imagine trying to type out notes while you’re cooking a complicated meal,” Cacioppo explains.
The biologists loved the microphone and iPad app Cacioppo and her team built for them. She had product fit. But there was a problem – “…the market for this was the size of my thumb…,” Cacioppo explains.
There was no market fit. She had a great product, but not one that a sustainable business could be built around.
At this point Cacioppo and her team stopped relying on brainstorming product ideas, took a leaf from the Lean Startup playbook, and began a process of customer discovery.
“We decided we weren’t allowed to build anything at all. We had to just talk to people—and talk to them until we had a lot of confidence and a mental model of customers, their jobs, the problems they might have and how we might solve them.”
This is where Cacioppo’s time at Union Square Ventures and Dropbox paid dividends. She had a network of potential customers she could talk to, including coworkers who had themselves moved on to found their own startups.
But when you talk to your potential customers, who can have widely different experiences, how do you know when you understand their common problem well enough to start building?
Her team had a nice heuristic for making this call – “…we decided we had to keep having these conversations until three-quarters of it was stuff we already knew”.
By sitting down with people and doing basic things like talking through their calendar with them – the highlights and the challenges – it did not take long to stumble onto security as an area of interest, and compliance as a challenge within that area.
It did help that Cacioppo was aware of how many large security businesses there were. It wasn’t just a large market, it was enormous.
Drawing on her own difficulties with compliance working on Paper at Dropbox, Cacioppo and her co-founder at the time decided to focus on SOC 2 compliance.
SOC 2 ensures service providers handle their clients’ data securely and responsibly. It is detailed and covers security, integrity, availability, privacy and confidentiality of data. That scope takes in just about every process in a business, because they all touch data to deliver their services.
An established strategy for founders early in their product ideation is to do things that don’t scale. One day their SaaS will be a software driven unicorn, but today, at the beginning, they need to do things manually, in person, to validate their product with their first customers. It’s hard work and people are reluctant to try it. Cacioppo wasn’t.
The first version of Vanta wasn’t even software. It was a consultation and a spreadsheet. Cacioppo interviewed the team at a startup called Segment and produced a gap assessment in spreadsheet form that they could use to guide Segment’s SOC 2 compliance.
Their next step was to ask the question, “Would this spreadsheet work as-is for anyone else?”. That is, was SOC 2 compliance standardisable? Could they productise it? And the answer, after the same spreadsheet was well-received by a second startup, was “Yes”.
So, like many startups of the era, Vanta took a spreadsheet and built a SaaS around it. But initially there wasn’t much Software in their Service. There were forms where customers could enter AWS credentials, but behind the scenes Vanta employees, often Cacioppo in the early days, would pull the data by hand and enter it manually before returning the report to the customer. They told these early customers that their software “was slow”.
From that manual start they continued to build out the software to automate and integrate the compliance process with customers’ operations.
At this time Vanta was accepted into the Y Combinator startup accelerator program. This gave them access to hundreds of other startups who became a source of customers.
SOC 2 compliance is all about proving that you have all the necessary security controls and procedures in place to protect client data. Achieving compliance is time consuming and expensive. More so if you need to pay a consultant to guide you through the process.
By productising the SOC 2 compliance process – creating a user-friendly interface to the numerous checklists, developing integrations to third party service providers, providing progress tracking and team features like collaboration and task assignment – Vanta took a time-consuming process that required expertise and a huge amount of domain knowledge and reduced it to the point where it was almost a data entry task. And that task could be shared across an organisation instead of having one person, or a small team, devoted full time to carrying it out.
The outcome of this was an increase in the number of businesses achieving SOC 2 certification. And as a startup, why wouldn’t you? It was required for some industries, like financial services and healthcare, and it was required by enterprise clients who saw it as part of securing their supply chain. It made your business look good and at the same time increased your own security by forcing the adoption of best practices across all of your processes.
Those businesses achieving SOC 2 compliance were doing it through Vanta. Between their initial seed round with Y Combinator in 2018 for $3,000,000, when they had a handful of clients, and their Series A round in May 2021, Vanta had grown to 1,000+ customers and ARR of $10,000,000.
These numbers, along with their $50,000,000 Series A round at a pre-money valuation of $500,000,000, raised a lot of eyebrows and inspired competitors to jump into the market.
Despite the competition, the market continues to grow and Vanta grows with it. Vanta now has over 5,000 customers and their current valuation stands at $1.6 billion, a true unicorn in a market they helped create.
A story like Vanta’s is great because we get to see the missteps and the delays, and then skip straight to what works.
What works is customer discovery, validation, and having a network.
Customer discovery works. Talking to potential customers, as many as you can. Listening to them and understanding their problems and finding the commonalities that can point towards a product.
But customer discovery may not be straightforward. Cacioppo had distinct advantages when she started the process. Her time at Union Square Ventures gave her an insider’s knowledge of how venture capital and startups worked. Not just the process, but as an analyst she would have had access to information on business structures and budgets, etc. And she came away with the beginnings of a success-focused network.
Her time at Dropbox helped expand her network and gave her not just practical product experience, but also firsthand experience of the challenges businesses face as they execute. For her, it was the challenge of compliance.
Finally, validating your product works. Validate your product idea in the quickest and most immediate way possible, even if it is manual and hard to do. Better a week of long hours over the keyboard than months of coding only to discover no-one cares.
There is no predictable path from idea to unicorn, but hopefully this article pointed the way to shortening your path.
The lessons might save you a few years of development on your next product. Or you might now be thinking it’s time to get a job and build up experience and contacts in the industry you’re interested in. That could also save you a few years in the long run as you search for the perfect product.
Unlocking Growth: Smart Resource Allocation For Digital Products
Your business is built on software and software is expensive to build because it is hard to build. You’re an SME so allocating your limited resources strategically is essential. This article will cover how to assess your current resources and capabilities, identify the best areas for innovation and ROI, and allocate resources to support your strategic growth initiatives.
It’s a complex process, so we’re going to give a quick overview of the strategies and decision-making involved so you know where to start and what you should be looking for.
To illustrate resource allocation we’re going to use the same imaginary company we use in all the articles in this series – Conferensity. They’re a B2B event management platform with aspirations to become a comprehensive conference solution.
Their strategy involves expanding their web-based software’s capabilities and developing two apps – one for conference admins and another for attendees.
While these are three digital products, 1 website and 2 apps, there is actually a fourth project – the updated backend that will implement the features the website and apps will provide interfaces for. This backend will also require extensive work to provide all the necessary admin features to manage the new frontend features. By dedicating their resources to these four digital projects Conferensity is actively working towards achieving their primary business objective.
Here is some of what they want their products to do:
Their first app will be for conference admins, duplicating the client-accessible website’s functionality. The second app, designed for attendees, will allow attendees to reserve seats, purchase tickets to the conference, and pay for related services, refreshments and merchandise. It will also have features like schedules and venue maps, presentation start reminders, live questions via chat rooms, presentation feedback, and special features for speakers.
Their strategy encompasses a website overhaul and the launch of two apps supported by a robust back-end system. The success of these digital projects is critical to achieving their goal of increased market share and revenue.
A comprehensive evaluation of your resources is essential for effective strategic planning. This assessment provides a clear understanding of your strengths, weaknesses, and areas for improvement, enabling you to make informed decisions about resource allocation and project prioritisation.
There are a lot of resource areas you can look at. But we think the five below are the ones that are going to really make or break your plans. By carefully examining the following five key resource areas, you can ensure that your project goals are in line with your capabilities:
To identify areas for growth, you need to conduct an analysis of market trends, customer needs, the competitive landscape, and your own internal capabilities. This assessment provides insights to pinpoint opportunities that can drive growth.
The process begins with market research to understand emerging industry trends and identify customer needs. By engaging with customers through surveys, feedback mechanisms, and user testing, you can gain insights into their experiences and expectations.
This customer-centric approach is essential for uncovering opportunities to enhance the product or service offering in meaningful ways. Equally important is benchmarking against competitors to identify areas where your business can differentiate and capture a larger share of the market.
Analysing the competitive landscape reveals gaps in the current solutions, allowing you to strategically position your innovations to stand out and appeal to target customers.
In parallel, you must assess your own internal capabilities, evaluating the technological feasibility of potential innovations. This involves conducting technical studies to ensure the planned developments can be realistically achieved with the available resources and expertise.
Financial projections are then used to estimate the potential return on investment, considering the costs of development, marketing, and sales efforts.
By taking this comprehensive approach, you can identify the most promising areas for innovation – those that are not only desirable from a market perspective, but also viable and financially sound for you to pursue.
This strategic alignment between market needs, competitive positioning, and internal capabilities is crucial for maximising the impact and return on innovation investments.
Conferensity, a B2B event management platform, provides a good example of how to identify areas for innovation that can deliver ROI. Let’s break down their process:
When allocating resources, you need to make strategic decisions to ensure each investment supports the company’s growth goals and maximises ROI. Let’s look at the decisions Conferensity made and the reasoning behind them. This can serve as a guide for other SMEs allocating their resources.
Conferensity recognised that the attendee app was a critical touchpoint for user engagement and revenue generation. They decided to allocate their most skilled developers to this project, ensuring that the app’s user experience and functionality would be of the highest quality.
This focus on development resources was based on the understanding that a superior attendee app could significantly enhance the overall event experience, leading to increased user retention and higher transaction volumes.
To efficiently progress with their backend development, Conferensity opted for staff augmentation by partnering with a near-shore team extension provider. This strategy allowed them to extend their software development team with experienced professionals who could handle the standard backend features. The benefits of this approach included cost savings compared to hiring full-time staff, access to a broader skill set, and the ability to scale the team up or down as needed. This flexible staffing solution enabled Conferensity to maintain focus on their core competencies while ensuring timely and efficient progress on their backend infrastructure.
Investing in cloud infrastructure was a strategic move for Conferensity to support the scalability and reliability of their expanded platform. They understood that as their user base grew, the demand on their system would increase. By choosing a scalable cloud solution, they ensured that their platform could handle this growth without compromising performance, thus providing a seamless experience for their users and maintaining a high level of customer satisfaction.
Conferensity allocated a significant portion of their resources to the marketing and sales efforts for the new apps. They targeted high-profile events to generate buzz and early adoption, understanding that these events would provide the visibility needed to attract more users. The alignment of marketing and sales efforts was crucial to ensure that the message about the new apps reached the right audience at the right time, ultimately driving adoption and revenue.
Adopting agile practices allowed Conferensity to manage their resources with flexibility. They could swiftly reallocate resources in response to feedback and market demands, ensuring that their projects remained on track and aligned with user needs. This approach to resource management was instrumental in allowing Conferensity to adapt to changes quickly, whether in response to user feedback or shifts in the market landscape.
The allocation of resources for web and app development is a critical task when pursuing innovation and market share. By evaluating your current resources and capabilities, identifying areas with potential for growth and ROI, and making deliberate decisions about resource allocation, you can support your growth initiatives. This approach ensures that each investment is aligned with your company’s objectives, using resources efficiently and positioning your business for success in the digital landscape.
Getting Resource Management Right In Active Projects
You’ve spent weeks, maybe months, working out detailed plans for building your app or your website. But reality isn’t very cooperative. Perhaps your schedule starts to drift. Maybe revenue doesn’t hit the level you predicted. Does your plan need to change or do you simply need to take a different approach?
This article explores strategies for optimising resource allocation and management and navigating trade-offs between time, cost, and quality.
This is a broad topic, a huge topic, so for this article we will focus on five key resources: financial resources, human capital, technological assets, operational capabilities, and market position.
To illustrate resource management we’re going to use the same imaginary company we use in all the articles in this series – Conferensity. They’re a B2B event management platform with aspirations to become a comprehensive conference solution.
Their strategy involves expanding their web-based software’s capabilities and developing two apps – one for conference admins and another for attendees. While these are three digital products, 1 website and 2 apps, there is actually a fourth project – the updated backend that will implement the features the website and apps will provide interfaces for. This backend will also require extensive work to provide all the necessary admin features to manage the new frontend features.
By dedicating their resources to these four digital projects Conferensity is actively working towards achieving their primary business objective. Here is some of what they want their products to do:
manage venue communication and arrangements (such as contracts, venue layout, and extra services like conference Wi-Fi, local tours, etc)
handle communication with speakers, panel members, and guests (including contracts, accommodation, presentation reservations, and travel), and
provide opportunities to make purchases through the app for everything from refreshments to accommodation and perhaps one day flights.
Their first app will be for conference admins, duplicating the client-accessible website’s functionality. The second app, designed for attendees, will allow attendees to reserve seats, purchase tickets to the conference, and pay for related services, refreshments and merchandise. It will also have features like schedules and venue maps, presentation start reminders, live questions via chat rooms, presentation feedback, and special features for speakers.
Their strategy encompasses a website overhaul and the launch of two apps supported by a robust back-end system. The success of these digital projects is critical to achieving their goal of increased market share and revenue.
Broadly, resource management needs a strategic approach to ensure that every investment contributes to your overarching business goals. These are the basic strategies you should apply across every aspect of your product development:
Now let’s look at specific recommendations for each of the five main resources we’re concerned with.
When it comes to managing your project’s financial resources, the first step is to create a detailed budget plan. This means breaking down the costs for each phase of the project and allocating funds accordingly. It’s important to be as comprehensive as possible, accounting for everything from software licences to employee salaries. Using tools like Xero or QuickBooks can make this process a lot easier, helping you track expenses and stay within budget. But creating a budget is just the beginning.
To really stay on top of your project’s finances, you need to schedule regular financial reviews. These assessments give you a chance to take a closer look at where you’re overspending or underspending, and make adjustments as needed. It’s a bit like checking your project’s financial vital signs – if something looks off, you can take action before it becomes a bigger problem.
Even with the most careful planning, unexpected expenses can still pop up. That’s why it’s smart to set aside a portion of your budget as a contingency fund. This is basically a financial safety net that ensures minor setbacks don’t completely derail your project.
How big is the contingency fund? That depends on your risk factors. If you have a lot of confidence in your success, then 5% – 10% is a good rule of thumb. If you are building a new product and your team’s experience is limited in this area, then 10% – 20% is the norm, but you might want to go even higher.
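To make that rule of thumb concrete, here is a minimal sketch in Python of a phase-by-phase budget with a risk-based contingency fund. The phases, dollar figures and the 15% rate are invented for illustration; they are not Conferensity’s actual numbers.

```python
# A minimal budget sketch: phase costs plus a risk-based contingency fund.
# All figures and the 15% rate are illustrative, not Conferensity's real numbers.

phase_costs = {
    "backend development": 180_000,
    "admin app": 90_000,
    "attendee app": 120_000,
    "website overhaul": 60_000,
    "licences and tooling": 30_000,
}

# New product, limited team experience in this area: pick from the 10-20% band.
contingency_rate = 0.15

base_budget = sum(phase_costs.values())
contingency_fund = base_budget * contingency_rate
total_budget = base_budget + contingency_fund

for phase, cost in phase_costs.items():
    print(f"{phase:<22} ${cost:>9,.0f}")
print(f"{'contingency fund':<22} ${contingency_fund:>9,.0f}")
print(f"{'total budget':<22} ${total_budget:>9,.0f}")
```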
Conferensity chose to allocate a significant portion of their financial resources to the development of their backend system, as it was the foundation for their website and apps. They also set aside a contingency fund to cover any unexpected costs that might arise during the development process.
When it comes to managing your project’s human capital, the first step is to take a good, hard look at your team’s skills. You need to figure out where the gaps are and decide whether it makes sense to train your existing staff or bring in some fresh talent. It all depends on your timeline and how quickly you need to get things up and running.
Hiring a bunch of new people can cause major delays, delays that may not make sense if you only need their skills for a short time. That’s where flexible staffing comes in. Consider extending your team via staff augmentation with some near-shore software developers. They can help fill in those skill gaps and handle any temporary spikes in workload without the long-term commitment.
But here’s the thing: no matter how you structure your team, you need to keep them motivated and engaged. It’s not just about getting the work done; it’s about making sure your team is happy and invested in the project’s success. Regular feedback sessions, recognition programs, and opportunities for growth can go a long way in retaining your top talent. Losing the wrong people is a risk you need to manage because it can cause huge setbacks.
In Conferensity’s case, they decided to invest in short course targeted training for their existing development team to make sure they had the skills to build the critical backend system. But they also brought on board an extended team to help with the frontend development of the website and apps. By using a mix of training and flexible staffing, they were able to get the right people in place to make their project a success.
No-one invests in bare metal infrastructure out of the gate any more. Cloud-based services like Amazon Web Services (AWS) or Microsoft Azure can host your applications and store your data, offering scalability and cost-effectiveness as you find your market. As your business grows, you can easily scale resources without investing in expensive hardware. With the cloud, you pay only for the resources you use, reducing overall IT costs until you reach unicorn scale.
Cloud providers also offer security and reliability, including protections against cyber threats like DDOS, and ensuring data accessibility through robust backup and disaster recovery solutions.
But the cloud is just part of your solution. To streamline your development process, you should look into agile development tools. Jira or Trello, for example, are project management tools that can help you keep your team on track and ensure everyone is working towards the same goals.
When it comes to version control, tools like GitHub or GitLab are important. They allow you to track changes to your codebase and collaborate with your team. Continuous integration and deployment (CI/CD) is also worth considering. By automating your build, test, and deployment processes using tools like Jenkins or CircleCI, you can improve your development process. This streamlines your workflow and reduces the risk of manual errors that can slow you down.
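CI/CD services are normally configured in their own pipeline formats (a Jenkinsfile, a CircleCI config) rather than in application code, but the logic a pipeline automates can be sketched in a few lines of Python: run each step in order, and stop the moment one fails. The commands and deploy script below are placeholders, not a specific project’s setup.

```python
# Sketch of the logic a CI/CD pipeline automates: run each step in order,
# stop the moment one fails, and only deploy if everything before it passed.
# Real pipelines express this in the CI tool's own config (e.g. a Jenkinsfile);
# the commands and deploy script here are placeholders.
import subprocess
import sys

def run_step(name: str, command: list[str]) -> None:
    print(f"--- {name} ---")
    result = subprocess.run(command)
    if result.returncode != 0:
        sys.exit(f"{name} failed, aborting pipeline")

run_step("install dependencies", ["pip", "install", "-r", "requirements.txt"])
run_step("run test suite", ["pytest", "--maxfail=1"])
run_step("deploy to staging", ["./deploy.sh", "staging"])  # placeholder script
```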
Conferensity opted to host their backend system on AWS, taking advantage of its scalability and reliability. They also implemented Jira for project management and GitHub for version control, enabling their development team to work efficiently and collaboratively.
Process automation is a strategy for enhancing operational capabilities and improving efficiency in web and app development. By identifying repetitive tasks and automating them using tools like Jenkins, Ansible, or Puppet, businesses can save time and reduce the risk of errors. This allows development teams to focus on other aspects of the project, contributing to faster delivery and quality products.
Data-driven decision making is another aspect of effective resource management. By leveraging data analytics tools such as Google Analytics or Mixpanel, businesses can gain insights into user behaviour and preferences. This information can then be used to make decisions about product development, prioritising features and improvements that align with user needs and expectations.
By basing decisions on data rather than assumptions, businesses can optimise their resource allocation and ensure that their products meet market demands.
Finally, implementing a quality assurance and testing process is important for ensuring product excellence. This includes testing and bug fixing throughout the development lifecycle. Automation tools like Selenium or Appium can streamline the testing process, reducing the time and effort required while improving the accuracy and consistency of results. By catching and addressing issues early on, businesses can avoid rework and delays, delivering quality products to their users.
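As a rough illustration of what such an automated check looks like, the sketch below uses Selenium’s Python bindings to load a page and verify that a key element is present. The URL and element ID are hypothetical; a real test suite would cover many more user flows.

```python
# Illustrative Selenium smoke test: open a (hypothetical) checkout page and
# confirm the purchase button is visible. URL and element ID are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # assumes Chrome and a matching driver are installed
try:
    driver.get("https://staging.example.com/checkout")      # placeholder URL
    buy_button = driver.find_element(By.ID, "buy-ticket")   # placeholder ID
    assert buy_button.is_displayed(), "Buy button missing from the checkout page"
    print("Checkout smoke test passed")
finally:
    driver.quit()
```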
Conferensity implemented a data-driven approach to decision making, using analytics tools to track user behaviour and gather feedback. This allowed them to prioritise features and improvements based on actual user needs.
Conducting competitive analysis helps identify opportunities to differentiate your offerings in the market. Tools like SEMrush and Ahrefs provide insights into your competitors’ online presence and marketing strategies, enabling you to make decisions about your own positioning.
Establishing a customer feedback loop helps ensure that your product development aligns with user needs and expectations. By gathering feedback through surveys, interviews, and social media monitoring, you can understand your customers’ preferences and pain points, allowing you to prioritise features and improvements.
Exploring partnerships with businesses or influencers can help expand your reach and strengthen your market position. By collaborating with companies that offer products or services that complement your own, you can tap into new customer segments and enhance the value proposition of your platform.
Conferensity actively monitored their competitors’ offerings and sought out feedback from their users to ensure they were meeting market demands. They also formed strategic partnerships with event venues and vendors to enhance their platform’s value proposition.
In this article we’ve explored various strategies for optimising resource allocation and management in live projects. By focusing on five key areas – financial resources, human capital, technological assets, operational capabilities, and market position – you can effectively navigate the challenges of web and app development.
Through the example of Conferensity, we’ve seen how these strategies can be applied in practice. By creating a detailed budget plan, investing in team training and flexible staffing, leveraging cloud-based services and agile development tools, implementing process automation and data-driven decision making, and conducting competitive analysis and establishing customer feedback loops, Conferensity was able to successfully develop their backend system, website, and apps while strengthening their market position.
Ultimately, effective resource management requires a strategic and holistic approach that aligns with your business objectives and adapts to changing circumstances. By implementing the strategies and practices discussed in this article, you can optimise your resource allocation, improve efficiency, and deliver experiences that meet the needs of your users and thus drive revenue.
Prioritise Success: Set & Track KPIs for Web & App Projects
When it comes to web and app projects, success is not solely determined by meeting deadlines and adhering to budgets. While these factors are undoubtedly important, the true measure of success lies in delivering tangible value that positively impacts your business. This is where Key Performance Indicators (KPIs) play a crucial role.
KPIs are not merely figures on a spreadsheet; they are the vital signs of your project. They provide a clear indication of whether you are progressing as planned or deviating from your intended course. KPIs serve as a guiding light for your decisions, ensuring that you remain focused on the most critical aspects of your project.
In this article, we will cover the importance of setting and tracking KPIs for web and app projects. We will explore a case study of Conferensity, a B2B event management platform, to illustrate how aligning KPIs with strategic goals can drive success.
Additionally, we will discuss the process of identifying the right KPIs, tools and techniques for effective tracking, and the significance of analysing and reporting on these metrics. By the end of this article, you will have a comprehensive understanding of how to prioritise success in your web and app projects through the disciplined application of KPIs.
To illustrate the importance of KPIs, let’s consider Conferensity, a B2B event management platform with aspirations to become a comprehensive conference solution.
Their strategy involves expanding their web-based software’s capabilities and developing two apps – one for conference admins and another for attendees.
While these are three digital products, 1 website and 2 apps, there is actually a fourth project – the updated backend that will implement the features the website and apps will provide interfaces for. This backend will also require extensive work to provide all the necessary admin features to manage the new frontend features. By dedicating their resources to these four digital projects Conferensity is actively working towards achieving their primary business objective.
Here is some of what they want their products to do:
Their first app will be for conference admins, duplicating the client-accessible website’s functionality. The second app, designed for attendees, will allow attendees to reserve seats, purchase tickets to the conference, and pay for related services, refreshments and merchandise. It will also have features like schedules and venue maps, presentation start reminders, live questions via chat rooms, presentation feedback, and special features for speakers.
Their strategy encompasses a website overhaul and the launch of two apps—one for conference admins and another for attendees—supported by a robust backend system. The success of these digital projects is critical to achieving their goal of increased market share and revenue.
When it comes to selecting KPIs for web and app projects, businesses must consider a wide range of metrics that cover various aspects of their operations. These KPIs can be broadly categorised into financial, customer, operational, and product-related metrics.
Financial KPIs, such as Monthly Recurring Revenue (MRR), Annual Recurring Revenue (ARR), and Customer Lifetime Value (CLV), provide insights into the financial health and growth potential of the business. These metrics help in making informed decisions about resource allocation, pricing strategies, and long-term financial planning.
Customer-related KPIs, including Customer Acquisition Cost (CAC), Churn Rate, Customer Retention Rate, and Net Promoter Score (NPS), focus on understanding the effectiveness of customer acquisition and retention strategies. These metrics help businesses identify areas for improvement in customer experience and loyalty.
Operational KPIs, such as Support Ticket Volume and Resolution Time, Code Deployment Frequency, and Server Uptime/Downtime, measure the efficiency and effectiveness of internal processes. These metrics are crucial for ensuring smooth operations and delivering high-quality services to customers.
Product-related KPIs, including Active Users (Daily/Monthly), User Engagement Score, Feature Usage Rate, and Product Stickiness (DAU/MAU Ratio), provide valuable insights into how users interact with the product. These metrics help in making data-driven decisions about product development, user experience improvements, and feature prioritisation.
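Several of these metrics are simple ratios once the underlying numbers are in hand. The short sketch below, using invented sample figures, shows how MRR, ARR, monthly churn, NPS and DAU/MAU stickiness are typically calculated.

```python
# Back-of-the-envelope KPI formulas using invented sample numbers.
monthly_fees = [299, 299, 999, 4_500]       # active customers' monthly subscriptions
mrr = sum(monthly_fees)                      # Monthly Recurring Revenue
arr = mrr * 12                               # Annual Recurring Revenue

customers_at_start, customers_lost = 120, 6
churn_rate = customers_lost / customers_at_start             # monthly churn

promoters, passives, detractors = 60, 25, 15                 # NPS survey responses
nps = (promoters - detractors) / (promoters + passives + detractors) * 100

daily_active, monthly_active = 1_800, 6_000
stickiness = daily_active / monthly_active                   # DAU/MAU ratio

print(f"MRR ${mrr:,}  ARR ${arr:,}  churn {churn_rate:.1%}  "
      f"NPS {nps:.0f}  stickiness {stickiness:.0%}")
```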
For Conferensity, a B2B event management platform, the most relevant KPIs would likely include:
By focusing on these key metrics, Conferensity can make data-driven decisions that align with its strategic goals and drive the success of its web and app projects.
Tracking KPIs effectively requires a combination of tools and techniques tailored to specific metrics.
For financial KPIs like MRR, ARR, and CLV, businesses often rely on financial software such as Xero, QuickBooks, or NetSuite.
These tools can be integrated with billing and subscription management platforms like Chargebee or Recurly to automate revenue tracking and reporting.
Customer-related KPIs, including CAC, Churn Rate, and NPS, can be tracked using CRM systems like Salesforce, HubSpot, or Pipedrive, and proactively managed using services like ChurnZero, Totango and Gainsight.
These platforms allow businesses to manage customer interactions, monitor customer health, and analyse customer feedback.
Specialised tools like Delighted or Promoter.io can be used to collect and analyse NPS data.
For operational KPIs, such as Support Ticket Volume and Resolution Time, tools like Zendesk, Freshdesk, or Jira Service Desk are commonly used.
These platforms help streamline support processes, track ticket metrics, and generate performance reports.
For monitoring Code Deployment Frequency and Server Uptime/Downtime, businesses may use tools like New Relic, Datadog, or Pingdom.
Product-related KPIs, such as Active Users, User Engagement Score, and Feature Usage Rate, can be tracked using product analytics platforms like Mixpanel, Amplitude, or Heap.
These tools require instrumenting the app or website to send user interaction data to the analytics platform. This data is then processed and visualised in dashboards and reports, enabling businesses to gain insights into user behaviour and product performance.
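As a sketch of what that instrumentation can look like on the server side, the snippet below records a hypothetical “Ticket Purchased” event with the Mixpanel Python SDK. The project token, user ID and event properties are placeholders, not Conferensity’s actual tracking schema.

```python
# Sketch of server-side instrumentation with the Mixpanel Python SDK
# (pip install mixpanel). Token, user ID and properties are placeholders.
from mixpanel import Mixpanel

mp = Mixpanel("YOUR_PROJECT_TOKEN")  # placeholder project token

def track_ticket_purchase(user_id: str, conference_id: str, amount: float) -> None:
    """Record a ticket purchase so it appears in funnels and engagement reports."""
    mp.track(user_id, "Ticket Purchased", {
        "conference_id": conference_id,
        "amount": amount,
        "platform": "attendee_app",
    })

track_ticket_purchase("user_123", "conf_2025_sydney", 249.00)
```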
In the case of Conferensity, they would likely use the following tools and techniques to track their key KPIs:
Tracking is only the beginning. The real value comes from analysing the data to gain insights and making informed decisions. Services like Tableau, Microsoft Power BI, or Google’s Looker can transform raw data into actionable intelligence. Regular reporting is crucial, but the frequency and format may vary depending on the KPI and its significance to the business.
For critical KPIs like revenue, customer acquisition, and churn rate, companies often opt for real-time dashboards that provide a constant pulse on performance. This allows decision-makers to quickly identify and respond to any deviations from the expected trajectory.
Other KPIs, such as customer satisfaction scores or feature usage rates, may be reviewed on a weekly or monthly basis, as they tend to change more gradually. These periodic reviews provide an opportunity for deeper analysis and strategic planning.
Many companies adopt a multi-tiered approach to KPI reporting. Operational teams may review granular metrics daily or weekly to optimise their processes, while executive teams focus on high-level KPIs in monthly or quarterly business reviews. This ensures that insights are communicated effectively across the organisation and that each level can take appropriate actions.
To maximise the impact of KPI insights, you need to build a culture of data-driven decision-making. This involves providing training and resources to help your employees interpret and act on KPI data. Regular cross-functional meetings, where teams share their KPI learnings and collaborate on improvement strategies, can also be highly effective.
The optimal reporting cadence and review process will depend on the unique needs and goals of your business. The key is to strike a balance between timely action and thoughtful strategic planning, ensuring that KPI insights are consistently translated into meaningful business outcomes.
Incorporating KPIs into your business strategy is a process that requires careful planning and execution. Here are some key steps to help you effectively implement KPIs:
For Conferensity, and any business investing in digital growth, the disciplined application of KPIs is not optional – it’s a cornerstone of strategic decision-making. By setting, tracking, and analysing the right KPIs, you ensure that every step you take is a step toward success.
The Future of Business is Core+
There’s an interesting interview on VentureBeat with Gilles Langourieux of Virtuos. Most of the interview is about the gaming industry, but Gilles says some interesting things about dynamic team sizing and how businesses are changing the way they structure their software development teams.
SoftwareSeni isn’t part of the gaming industry, but these ideas tend to cross industry lines, and since remote teams are a big part of what we do, we’re always interested in new takes.
Here’s one of Gilles’ quotes. The game industry is notorious for hiring fast and firing faster, but flexibility in team size can benefit any industry:
We’ve been relying too much on fixed teams. We need more flexibility. We all know that there’s a peak in production, and then after that peak you need fewer people. If you’re trying to develop games with fixed teams only, you have that vicious cycle of hiring a lot of people and then letting them go when you’re done
As Gilles points out, having providers who can supply talent on demand is good for everyone:
…one of our main reasons for existing is to provide both quality and flexibility to our clients, so that everyone’s jobs are more stable. You have smaller core teams that are more stable on the customer side, and then we also maintain stability by being able to transition people from one project to another, one client to another.
He’s pointing the way towards a particular vision of the future of software development. This concerns every SME that wants to be more than a lifestyle business, because every business is a software development business now.
The important phrase is “smaller core teams”. The vision is that your inhouse software development team is small and contains the deep understanding of your product and directs your long term strategy for product growth.
The core team gives you continuity and quality control while overseeing a managed remote team that shifts dynamically in size and skill set.
It’s a thought-provoking idea. Because it’s not really about reducing fixed costs in software development.
It’s really about SMEs adopting a new level of flexibility, making them able to react quickly in the market, and, more importantly, allowing them to push their products forward based on vision instead of based on office headcount or inhouse skillbase.
It’s a fundamental shift in mindset that we’re going to see more and more businesses adopt because it’s going to deliver results.
We’re calling it Core+ – your core team at the heart of an on-demand team that changes in size and structure to meet the latest challenge.
If you want to take the Core+ approach you’re going to need your in-house team to already have experienced oversight roles in place. This is essential to maintain control over your product vision. Let’s go quickly through the roles you’ll want to have in place.
This core team should include a product manager who deeply understands the market, customer needs, and the overall product strategy. They will be responsible for driving the vision and roadmap for the product.
Alongside the product manager, you’ll want an experienced technical lead or software architect. This role will ensure the technical feasibility of the product vision and make the high-level design decisions. They’ll also be responsible for maintaining code quality standards across the extended team.
Having a strong project manager with remote team management experience is also essential. They’ll coordinate the efforts of the core team and extended remote team. Effective communication, deliverables, and risk management will be key (though of course you’ll be running Agile to severely limit the risk).
Depending on the size of your business and your product’s complexity, you may also want to include other roles like a UX designer, data analyst, or operations manager. The goal is to have a well-rounded core team that can provide leadership and direction across all aspects of the product.
If you need to hire to build your core team, look for people with experience managing remote teams and/or working with remote/hybrid teams. Everyone on the team should be a strong communicator – they’re all going to be involved in presenting a coherent vision of your product to your remote team. As that remote team shifts in size across product and business cycles it is communicating and maintaining the product vision that is going to determine your success.
With the right core team in place you can strategically, even tactically, leverage remote teams to drive growth. You’ll be able to adapt to market changes and scale development up or down as needed, all while maintaining a strong product vision and high quality standards.
This Core+ approach can give you a competitive edge in the marketplace. By making team scaling responsive you get to decide where you want to place your big bets and how fast you want to move. This level of control is what helps you take the lead and stay there.
AI is going to make remote teams and video calls less frustrating
While COVID changed the nature of work for lots of people, at SoftwareSeni online and remote work has always been a part of our experience.
Team extension for software development is the key service we provide our partners. Regular virtual stand-ups, code reviews, and one-to-one meetings for things like discussing design decisions and troubleshooting are part of the normal working day for our teams.
As comfortable as we are with video meetings, we are quick to adopt any practices or tools that can help us streamline and get the most from them. So we’re looking forward to trying out the new AI powered features announced by Zoom and Google.
Let’s have a look at the features we think will make a real difference for everyone who relies on video meetings. Having said that, this quick overview is biased towards mixed local/remote software development teams.
This one seems like it is targeted at meetings with multiple participants, but we think it will help everyone. First, being freed from taking notes so you can focus on what is being discussed is a big win. Second, having a thorough summary of even a one-on-one meeting means no-one has to worry about the quality of their note-taking skills or their memory.
We’re also assuming that these summaries end up being incorporated into, say, your Google Drive, if you are using Google Meet. Then they become searchable and discoverable and easy to incorporate into your planning and follow-ups.
This is an extension of the previous feature, but is worth pointing out. What we like about this is, again, documenting the meeting is handled by Google’s or Zoom’s AI. They’ll extract the major talking points and action items for anyone who missed it.
If that’s not enough detail, both services offer transcripts if you need the full text of the meeting.
Like watching Netflix with subtitles on, but more useful. Captions on meetings can really help with the inconsistent audio across multiple speakers.
Both Google Meet and Zoom also provide the option to deliver translated captions. We find that despite our multilingual team’s high level of language mastery, translated captions improve the clarity and quality of communications in video meetings.
The captioning and translation is not perfect, but like watching movies, the combination of speech and text reduces the effort and concentration needed to follow what is happening.
This isn’t really a thing we care about, but maintaining privacy by having your background blurred out or replaced is an option we can get behind. We should all want video calls to be comfortable for everyone.
Duet for Google Meet is also promising “studio lighting” and other image enhancements, as is Zoom. This is starting to veer into the world of “filters” and how we appear on camera compared to “reality” (remember the lockdown boomers rendered as potatoes on call?). But again, if these features help team members feel more comfortable on call or even simply enhance everyone’s experience by providing clearer views of each other, then we’re all for it.
Zoom’s AI Companion is all about deepening integration with their suite of calendar and messaging tools. We’re a Google Workspace shop, so Zoom features like AI-powered meeting scheduling are not going to impact us. The nature of software development doesn’t really require it. You’re either participating in scheduled stand-ups, etc, or you’re hopping on a call you’ve just coordinated via chat.
Zoom also has features that monitor and report on conversations. They sell it as a feature to help improve the quality of your sales team’s calls. Not the kind of thing our developers need for their day-to-day interactions with other team members.
We’ll let you know, but we suspect that, despite the immense investment in tech and infrastructure these features represent, we’re only going to see an incremental improvement, more of a quality of life increase than an efficiency multiplier.
Calls will go more smoothly because captions and translations, as well as transcripts and summaries, will allow everyone to focus on the purpose of the call instead of the technology of the call. Those transcripts and summaries should reduce the flurry of follow-up emails and messages after calls, which will be welcome.
Once your business gets above a handful of people, communication becomes the key component of your success. Our headcount is north of 150, so we know this firsthand. In the balancing act that is making your business work, a balancing act that includes not just optimising who you work with but also where they work from, video calls are going to be part of your internal comms mix. These new features should help you bring them closer to the quality and effectiveness of in-person meetings.
But we’ll wait and see if it turns out that way.
WFH vs RTO: How and why we transitioned
As more and more businesses require employees to return to the office (RTO) there have been two popular, but cynical, takes on the trend. The first is that the businesses don’t want their offices sitting empty, especially if they own them rather than lease them. The second is that the businesses are control freaks and want to be able to once again monitor their employees’ every moment.
These takes are not completely wrong. There will always be businesses that make decisions based on dubious reasoning. But for most businesses that have more than a handful of employees, bringing them back to the office can be a matter of survival.
As a company that provides software development services and employs a large and growing team of software developers, we’ve felt the impact of Work From Home. It’s affected our staff and our operations to the point where we are now implementing practices to counteract it.
This article is a discussion about what we experienced during the COVID WFH period and its aftermath, what we’ve learned from it, and how we are changing the way we work to address it.
During the early part of WFH under COVID we experienced minimal impact in our transition away from the office. This was due to all of our staff transitioning directly from the in-office experience where they all knew each other well and were part of the culture.
It was during the later phase of COVID WFH, when everyone had undergone extended isolation and new staff members had joined us, that we started to see the impact on our culture and our mission.
The impact can be summarised in 4 words – Continuity, Clarity, Capability and Growth.
Continuity, in this context, includes maintaining a business’s culture, practices and institutional knowledge while also preserving staff development and advancement and integrating new hires into the business.
Clarity is having a clear view into the operations of the business. Some parts of this view can be found in reports. Other parts are found in meetings. Some parts are found only by in-person interactions and observations.
Capability is being able to execute at the level your business needs. The big challenge of running a business is you never have the perfect person for every role. You need to be able to work with the resources you have. But these resources might limit your options.
Growth is exactly what it sounds like. It might be growth in product or service offerings or growth in size or growth in profitability.
No-one can deny that people like working from home. At the same time, no-one can claim that everyone likes working from home or that everyone is better off working from home or that there are no negative consequences created by WFH strategies.
What we found with our large team was that the people who fared best under work from home were in senior roles, had strong communication skills, and were in positions that required only minimal collaboration or reporting. Senior developers are an example of this kind of role.
On the other hand, we found junior developers were struggling to complete work. This caused them stress and in some cases anxiety. They suffered, their work suffered, and more pressure was put on senior developers to fill in gaps. But it was during lockdown and we all just soldiered on.
When we began bringing employees back to the office we found that many of our junior developers were still just that – junior developers. Despite the amount of time they had been with SoftwareSeni they had made little advancement in their skillset, including not just programming but the soft skills that are part of working within a team, as well as in carving out their own unique career path. We saw a similar slowdown in our mid-tier developers as well.
We use a mix of formal and informal mentoring. Formal mentoring includes activities like shared planning sessions and code reviews. Informal mentoring covers all those little moments of learning and experience transfer that happen in an office. In an office environment it is easy to turn to someone more experienced for help when you get stuck. In an isolated environment like WFH, where finding a free coworker with the right experience and organising a video call and screen sharing is necessary to answer a question, that question might go unanswered, creating delays or resulting in a poorer quality solution.
There is also learning by osmosis that takes place in an office environment. Being present while more experienced coworkers discuss problem solving and strategies for the problems they face is itself an opportunity for learning.
You may be reading these examples and imagining solutions for each of them that don’t require working in an office. Just put a call out on Slack. Invite junior developers to Zoom calls. They look like solutions, but the friction involved in using them and the quality of the experience they provide lead to worse outcomes than simply being in the same space together. We know this because we had staff working from home for nearly two years using these “solutions”, and things are going much better now that most of the team is back in the office.
Bringing the team into the office is necessary for Continuity: continuity of our team members’ careers, continuity of inhouse experience, and continuity of “institutional knowledge” and our culture.
Without this emphasis on maintaining continuity a business like ours with a team in the hundreds could see a collapse of our capabilities and our quality of execution drop to the point where we can no longer be competitive.
Goodhart’s law states “When a measure becomes a target, it ceases to be a good measure”. This law is about how when given a target, like a KPI, people’s natural tendency is to optimise their actions and find ways to game the system to hit the target.
During Work From Home we found something similar happening in our regular meetings. It wasn’t malicious, it was just the outcome of the team striving for efficiency.
Since, of course, all our meetings were video meetings, we adopted clear agendas for running our regular meetings. We all knew in advance what needed to be discussed and we got through them quickly and cleanly despite the inherent clumsiness of group video calls.
But once we returned to the office it became apparent that what was being reported in the meetings had become a streamlined, tightly focused delivery of the agenda items. Issues of all kinds had developed in the business but they had fallen through the cracks of our carefully planned and cleanly executed meetings.
The issues had been allowed to develop because as everyone was sitting alone in their home offices they weren’t just physically isolated from each other, they were physically isolated from the issues. And almost all the issues were complex and identifying them as issues from a single perspective was unlikely.
You can’t have a report for everything. You can’t have a KPI for every facet of your business. Workflows can’t cover every situation. But there are still countless factors that need to be monitored and dealt with.
They’re different for every business, they’re different for every department, and they’re different for every team. Seal everyone off from each other and you may never be able to connect the dots.
Clarity comes from having multiple views of the same shared experience. Clarity comes from secondary sources, back channels, passing comments, and interactions with the team. This is where recognising problems and solutions begins. From here they end up in reports.
This is where Capability comes into play. Your business needs to be able to run, survive and thrive with the team you have available. What you can do is constrained by the skill and experience of your team.
We found performance under WFH was dependent on the skills of the manager. WFH, and remote work in general, requires a higher level of skill to pull off than normal in-office team management.
It takes time to adjust to the different management style. Again, like our meetings discussed above, reducing all interactions to video calls that need to be scheduled or coordinated changes outcomes.
Simple things that don’t need a meeting to be shared, like job status, hours worked, and so on, are easy to monitor. Addressing them, however, along with other more subtle concerns, like engagement, role clarity, skill acquisition, morale, and work quality, is made more difficult by the inherently confrontational format of a video call.
One of our main services is software development team extensions. Remote teams as a service, basically. You would expect that we would be masters of remote team management. But the teams are remote teams only for our clients. All of our team members, before COVID, worked out of our offices. Part of our service is acting as a secondary level of management or as a team support role for our clients. This was done in-person until the lockdowns. Which meant our team support staff had to learn the new skills required to manage remotely. Some adapted better than others.
Some businesses rely on employees executing in roles with basic skill requirements. Other businesses rely on a small number of highly skilled employees who can execute independently of each other. These businesses can continue to do well under Work From Home or with a fully remote team.
If you’re like us, where your team is growing, where you need employees advancing between roles, and you need the resilience of a team where knowledge is shared instead of siloed, then Work From Home is not the best option.
Growth for a business relies on having a long term vision and the strategies that support it. For SoftwareSeni, our team is the heart of our strategy: growing our team, helping them develop their skills, supporting them as they grow to be important contributors to our company and to our clients.
With people at the centre of our growth and success, Work From Home has been a disadvantage. It didn’t stop us from delivering for our clients, but it did slow us down as an organisation. A big part of that slowing was the result of difficulties being faced by so many of our employees.
In January 2023 we started bringing the team back to the office. With a couple years of remote work experience throughout the organisation, we have been able to approach it with flexibility.
Some roles are able to split time between WFH and RTO, and most roles are able to WFH on occasions where a stint of independent work aligns with current requirements.
Clarity and morale, the latter being the real keystone of continuity, were the first and most obvious wins as the bulk of our team returned to the office. No-one would say they enjoy a return to commuting, but the ease of working in a team that is physically present, in terms of efficiency, quality and camaraderie, does help ease the pain.
The WFH period has left us with a more flexible and more resilient team. This increase in capability is contributing to a new burst of growth, as projects and inhouse programs that were planned during COVID and were waiting on our return to the office are now being implemented.
Not every business needs to work from an office. And there are probably very few businesses that need every team member in the office every day of the week.
Looking at your business through the lenses of Clarity, Continuity, Capability and Growth, like we did, provides a clear structure for helping you decide where on the continuum between Work From Home and Return To Office your business needs to place itself.
How are LLMs like ChatGPT going to revolutionize real estate?
Here is how AIs like ChatGPT are going to revolutionise real estate: invisibly, by increasing efficiency in the market, and visibly, by making customer services a bit cheaper and higher quality.
AI is already in real estate, and has been for a while, but it’s not AI like ChatGPT. The AI in real estate is a collection of technology and techniques known generically as Machine Learning (ML). It uses the same basic building blocks as ChatGPT, called neural networks, but with different structures and different arrangements of those blocks, because it serves a different purpose.
Trulia is an example of a proptech company leveraging the power of ML. They’ve used it to build their property recommendation engine, for predicting user engagement, and for identifying property features from photographs.
These are things ChatGPT can’t do. ChatGPT is a Large Language Model (LLM). It’s a multi-layer neural network that has been fed a tremendous amount of text and it is very good at stringing words together into sentences in response to some input text.
It has been fed so much text that it is very good at masquerading as an under-trained lawyer, a sub-par business analyst, a mediocre marketer, a hack journalist, etc, simply by stringing together pieces of the text it has been trained on.
While being very good at generating text, LLMs like ChatGPT aren’t very good at math. You can’t use them to do things like predict user engagement or identify property features from photographs.
What LLMs are good at is bridging the vast gulf between human written text and the kind of numerical inputs ML systems can deal with.
This can be as simple as basic sentiment analysis, turning a review like “This is so not the best hamburger joint in town” into a -1, and a review like “I hate how good their fries are” into a +1 – simple numbers that can be used in a prediction model.
Stepping beyond this, LLMs can be used to turn text into ranks. An LLM can consistently turn, say, countertop descriptions like “mustard yellow formica”, “granite”, “hand-poured, hand-polished concrete” into a range from 1 to 10. The real value in this is that when it sees a new description, like “clear resin featuring embedded seashells”, it can fit it into the same range.
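As a rough sketch of what that ranking step could look like in code, here is a version using the OpenAI Python library (pre-1.0 style). The prompt wording and the 1–10 scale are our own assumptions, not a recommendation for any particular product.

```python
# A minimal sketch: asking an LLM to turn free-text countertop descriptions
# into a 1-10 score we can feed into a prediction model.
# Assumes the (pre-1.0) OpenAI Python library and an API key in the environment.
import openai

def score_countertop(description: str) -> int:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,  # we want consistent, repeatable scores
        messages=[
            {"role": "system",
             "content": "Rate kitchen countertop descriptions for desirability "
                        "on a scale of 1 (dated) to 10 (premium). "
                        "Reply with the number only."},
            {"role": "user", "content": description},
        ],
    )
    return int(response["choices"][0]["message"]["content"].strip())

for text in ["mustard yellow formica", "granite",
             "clear resin featuring embedded seashells"]:
    print(text, "->", score_countertop(text))
```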
Moving beyond simple ranking, we can combine it with data extraction. Here is an example where an LLM pulls specific data from a police report. It could just as easily be a property listing:
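A minimal sketch of the same kind of extraction, applied to a property listing rather than a police report (the prompt wording and field names are our own assumptions):

```python
# Sketch: asking an LLM to pull structured fields out of a free-text listing.
# Assumes the (pre-1.0) OpenAI Python library and an API key in the environment.
import json
import openai

LISTING = ("Charming 3 bedroom weatherboard cottage on 600sqm, renovated kitchen "
           "with granite benchtops, single garage, walking distance to the "
           "station. Offers over $850,000.")

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0,
    messages=[
        {"role": "system",
         "content": "Extract bedrooms, land_size_sqm, garage_spaces and "
                    "asking_price from the listing. Reply with JSON only."},
        {"role": "user", "content": LISTING},
    ],
)

record = json.loads(response["choices"][0]["message"]["content"])
print(record)  # e.g. {"bedrooms": 3, "land_size_sqm": 600, ...}
```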
This kind of “encoding” has normally been a job for humans. This drove up the price for the production and updating of useful databases, making them expensive, rare and proprietary. Spending the money and the time to do the encoding and compile a database was a nice way to build a moat.
But now it’s not a moat because anyone who can connect a web scraper (or web scraping service) to an API call to an LLM can fill a database with any information they want in days for next to nothing.
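A sketch of what that pipeline can look like, assuming requests and BeautifulSoup for the scraping, SQLite for storage, and an LLM extraction step like the one sketched above. The URLs and table schema are placeholders.

```python
# Sketch of a scrape -> LLM extraction -> database pipeline. The URL list and
# table schema are placeholders; extract_fields() follows the extraction sketch above.
import json
import sqlite3
import openai
import requests
from bs4 import BeautifulSoup

def extract_fields(text: str) -> dict:
    """Ask an LLM to pull structured fields out of free-text listing copy."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo", temperature=0,
        messages=[{"role": "system",
                   "content": "Extract bedrooms, land_size_sqm and asking_price "
                              "from the listing. Reply with JSON only."},
                  {"role": "user", "content": text}])
    return json.loads(response["choices"][0]["message"]["content"])

conn = sqlite3.connect("listings.db")
conn.execute("CREATE TABLE IF NOT EXISTS listings "
             "(url TEXT PRIMARY KEY, bedrooms INT, land_size_sqm INT, asking_price INT)")

for url in ["https://example.com/listing/1", "https://example.com/listing/2"]:
    html = requests.get(url).text
    page_text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
    fields = extract_fields(page_text)
    conn.execute("INSERT OR REPLACE INTO listings VALUES (?, ?, ?, ?)",
                 (url, fields.get("bedrooms"), fields.get("land_size_sqm"),
                  fields.get("asking_price")))

conn.commit()
```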
So getting data isn’t going to be much of a problem any more. That’s going to bring efficiency increases. Maybe some parts of the market will eke out a few more percentage points in returns, but only if they can use the data.
Trulia, for example, isn’t running their services on LLMs. LLMs will only be part of the pipeline. The real work and real value will still be done with classic ML models fed with higher quality data encoded by LLMs.
So, that’s the invisible part of AI’s impact on real estate. Let’s now look at the likely visible impact.
This visible impact is going to be based on the highly visible features of LLMs – their ability to generate text and respond in a conversational manner.
We’ve already seen too many real estate listing generators that use LLMs. One of them, agently, provides LLM generated property descriptions as one of their less important features.
This is one of the recurring gotchas of building products with LLMs. It is so easy to get text out of an LLM that whatever product idea you have, if it is just producing text then your business model is in danger of becoming another company’s bullet point.
Chat is the big, consumer-facing application of LLMs like ChatGPT, Bing and Bard. With the ability to train LLMs on your business’s documentation via fine-tuning (OpenAI’s page on it) or using embeddings to constrain the information an LLM can work from, it seems we can all turn over our customer enquiries to software, saving us huge sums of money.
If only it were that simple.
The problem with using LLMs to “chat” to your customers is that you need to rely on raw text from the LLM. LLMs don’t have any judgement. They just generate text. The result is that they “hallucinate” – produce wrong or even wildly wrong responses – and there is no way to guard against this except having a human in the loop.
If you are building your customer service chatbot on top of an LLM like GPT-4, even with fine-tuning, the data you supply (chat logs, FAQs, blog articles) will be such a tiny percentage of the LLMs “knowledge” that it will inevitably “hallucinate” when answering questions. It might even insert information from a competitor’s products (that were documented on the internet prior to September 2021).
The other problem is that prompt jailbreaks – strategically written text that tricks the LLM into leaking information or hijacks it for the user’s own purposes – are not a solved problem.
The end result is that LLMs are not predictable enough to be trusted and so are not going to replace staff in customer service roles. Instead, they are going to act as assistants, helping customer service staff find answers faster, provide better answers, and manage complex and ongoing interactions.
Crisp wrote up a fantastic deep dive into their efforts to add AI to their customer service. It’s not easy. It’s not cheap. It will get cheaper, but when you look at the steps they had to go through it is hard to imagine it getting easier.
Zendesk, the customer service SaaS, have also found that AI and LLMs are going to augment staff, helping to improve customer experience, instead of replacing them.
So AI chat as it is popularly imagined – replacing customer service staff with a bit of code – is not going to happen any time soon. For anything other than the most basic and pro forma customer enquiries, where an LLM chatbot might be limited to being a high-powered FAQ regurgitation engine, humans will still be in the loop and on the payroll.
That’s the big question. The AI services landscape is already fracturing into service providers who will run LLMs for you, fine-tune them for you, host your data for you, etc. AI tools are generic tools. At the coal face they work just as well for any industry, be it real estate or livestock management. So the shovel sellers are already busy selling shovels.
Proptech startups need to be looking at what untapped data is out there, preferably in text format if you want to jump on the LLM bandwagon. That text could be in paper format. Have you noticed how good OCR has gotten in the last year or two?
The data might already be sitting on your server. A database of transactions is a prediction model waiting to be trained. Prediction gives you the opportunity to optimise and de-risk, two things every industry wants to pay for.
Whether you want to reduce the cost of comms with tenants or streamline nine figure real estate transactions or improve compliance in building maintenance schedules, you need to be asking yourself where you can scrape, collect, licence or buy the data you need.
That, we expect, will be easier to do than hire the data scientists you’re going to need to make it work. Maybe the shovel sellers will fix that for us.
How to harness AI and save your business from the future
Generative AIs like ChatGPT, Bard and Bing are changing the world faster than we can imagine. So fast that there are now ChatGPT-like AIs that can run on smartphones. So fast that the cost of training a ChatGPT-like AI has dropped from $4.6 million in 2020 to $450k today. And it’s happening so fast that startups are seeing their business model trashed by Google and Microsoft before they can get traction. The speed of change is making people suspect OpenAI is using ChatGPT to speed up their development of new AI and features.
If you’re a startup or a small to medium business, generative AI is going to accelerate and empower you. You and your team are going to work smarter and work faster. You’re going to do more with less, or grow and do much, much more than you imagine possible.
AI is going to make it possible for a lone founder to do what a medium-sized company does today, and it will allow a medium-sized company to do what right now only a big company can do.
This shift in ability to execute is making people worried that jobs are going to be lost as everyone incorporates AI into their workflows. However, as has been pointed out by many commentators, if your revenue per employee keeps going up as they complete work faster and do more using AI, why would you fire anyone?
How much of a change in productivity will we see? In a draft of a working paper titled “GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models” released on March 27, 2023 by Eloundou et al, the following estimations are made:
“Our findings reveal that around 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while approximately 19% of workers may see at least 50% of their tasks impacted. … The projected effects span all wage levels, with higher-income jobs potentially facing greater exposure to LLM capabilities and LLM-powered software. … Our analysis suggests that, with access to an LLM, about 15% of all worker tasks in the US could be completed significantly faster at the same level of quality. When incorporating software and tooling built on top of LLMs, this share increases to between 47 and 56% of all tasks.“
To help you understand how you can make the most of the changes generative AI is going to bring we’re going to start with the basics and give you a quick background on ChatGPT and how it and other generative AIs work (this includes Microsoft’s Bing and Google’s Bard).
After that we’ll go through the ways you can take advantage of AI and not be left behind.
To paraphrase a random internet commenter – “People would be shocked if they understood how simple the software behind ChatGPT really is”. If you’re technically minded this will show you how to build something similar to ChatGPT in 60 lines of code. It can even load the data used by GPT-2, one of ChatGPT’s predecessors.
ChatGPT was built by OpenAI. It’s a type of Large Language Model (LLM) and part of the class of AIs called “generative AI”. A language model is a computer program designed to “understand” and generate human language (thus “generative AI”). Language models take as input a bunch of text and build statistics based on that text – things like which letter is most likely to appear next to the letter “k” or which word is most likely to come before “banana” – then use those statistics to generate new text on demand.
When a language model is generating text, like in response to a question, at the most basic level it is simply looking at the text, in this case a question, and using the statistics it has generated to choose the word most likely to come next.
A Large Language Model (LLM) is just a language model trained on a large amount of text. It is estimated that GPT-3, the LLM that underlies the initial version of ChatGPT, was trained on 300-400 billion words of text taken from the internet and books. That training was, basically, showing it a word from a document, like this article, along with the approximately 100-500 words that preceded it in the document (only OpenAI knows the actual number).
So if an LLM was fed this very article, it might be shown the word “human” and also the words “ChatGPT was built by OpenAI … and generate” that led up to it.
It turns out that when an LLM is fed nearly half a trillion words and their preceding text to build statistics with, those statistics capture quite complex and subtle features of language. That isn’t really a surprise. Human language isn’t random. It has a predictable structure, otherwise we couldn’t talk to each other.
It’s not just predictable. There is a lot of repetition. Repetition in the phrases we use, like “how’s the weather”, but also in sentence structure, “The cat sat on the mat. The bat spat on the hat.”. Even document conventions. Imagine how many website privacy policies ChatGPT would have been trained on by using the internet as a source of text.
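To make the idea of “building statistics” concrete, here is a toy version in Python, shrunk down to single preceding words instead of hundreds. Real LLMs learn these statistics with neural networks rather than a count table, so treat this purely as an analogy.

```python
# Toy version of "which word is most likely to come next": count word pairs
# in some training text. Real LLMs learn these statistics with neural networks
# over hundreds of preceding words, not a simple count table.
from collections import Counter, defaultdict

training_text = ("the cat sat on the mat the bat spat on the hat "
                 "the cat sat on the hat")

counts = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

print(counts["the"].most_common(3))   # e.g. [('cat', 2), ('hat', 2), ('mat', 1)]
print(counts["on"].most_common(1))    # [('the', 3)]
```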
When you ask ChatGPT a question, the words of your question become the preceding text for the next word. This preceding text is called the “context”. It’s also known as “the prompt”.
Your question, the context, is used by ChatGPT to find the most statistically probable word that would begin the answer.
Let’s say your question is “Why is the sky blue?”. First, imagine how many times that question appears on the internet and in books. ChatGPT has definitely incorporated it many times into its statistics.
“Why is the sky blue?” is a 5 word question, and forms the 5 word context. So what is the 6th word of the context going to be? It’s going to be the word most likely to appear 5 words after “why” in all the text ChatGPT has ever seen, as well as 4 words after “is” and 3 words after “the” and 2 words after “sky” and 1 word after “blue”.
(The question mark is also important, but we’re ignoring that for this simple explanation)
That word, the most probable word to fit all those conditions at the same time might be “The”. It’s a common way to start an answer. Now our context has 6 words:
“Why is the sky blue? The”
And the process is repeated:
“Why is the sky blue? The sky”
and repeated:
“Why is the sky blue? The sky is”
“Why is the sky blue? The sky is blue”
“Why is the sky blue? The sky is blue because”
The context grows one word at a time until the answer is completed. ChatGPT has learned what a complete answer looks like from all the text it has been fed (plus some extra training provided by OpenAI).
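Continuing the toy analogy from the sketch above, generation is just repeating that lookup, appending the most likely next word to the context each time:

```python
# Toy next-word generation loop using the count table built in the earlier sketch:
# repeatedly append the most likely word to follow the last word of the context.
def generate(counts, context, max_words=6):
    words = context.split()
    for _ in range(max_words):
        last = words[-1]
        if last not in counts:          # nothing learned about this word
            break
        next_word = counts[last].most_common(1)[0][0]
        words.append(next_word)
    return " ".join(words)

print(generate(counts, "the cat"))
# -> "the cat sat on the cat sat on" ... a toy model gets stuck in loops;
# real LLMs use far more context, and some randomness, to avoid this.
```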
You may have heard about “prompts” and “prompt engineering”. Because every word in the context has an effect on finding the next most probable word for the answer, every word you include will act to constrain or shape the possibilities for the next word. For example, here is a prompt and answer from ChatGPT:
[screenshot: a ChatGPT prompt and its answer]
Add a few related words and the answer shifts in a predictable way:
[screenshot: the revised prompt and the shifted answer]
This, in a nutshell, is what prompt engineering is about. You are trying to choose the best words to use to constrain ChatGPT’s output to the type of content you are interested in. Take a common prompt like this:
“Imagine you are an expert copywriter and digital marketer with extensive experience in crafting engaging and persuasive ad copy for Facebook ads. Your goal is to create captivating ad copy for promoting a specific product or service”
Don’t be fooled into thinking that there is some kind of software brain on a server in a giant data centre in the Pacific Northwest imagining it is an expert copywriter. Instead, think of all the websites run by copywriters and digital marketers and their blog articles where they discuss Facebook ads or writing engaging copy.
You can read OpenAI’s guide to prompt engineering here, and this guide to prompt engineering goes even deeper.
It is without doubt amazing that ChatGPT does what it does. It is also amazing that the process is so deceptively simple – looking at which words come before other words. But it takes nearly half a trillion words of human-to-human communication to provide the data to make it happen.
This is a simplification and leaves out important details, but what you need to know is that generative AIs like ChatGPT, Bard and Bing are always only choosing the next most likely word to add to a reply. It’s a mechanical process prone to producing false information. On top of this unavoidable feature of blind, probabilistic output, actual randomness is added to their word choices to make the output more “creative” or “interesting”.
Generative AIs have been trained on enough logic and reasoning examples to mimic how we use language to communicate logic and reason. But they are, in the end, text production programs. There is no logic or reasoning as humans use it involved in producing that text. Even if the text contains a logical argument. So always review carefully what text they produce for you.
As one person who spends a lot of time producing text said:
[embedded quote]
Having said all that, there is an argument that generative AIs, in particular LLMs, might be doing more than just producing text. This argument says they might be building a model of the world and the features of the world as they build their billions of statistics about text. And that these models might be what is making generative AIs so powerful.
Lending some partial support to this argument is the number of “emergent” abilities generative AI is showing. The backers of this argument say these are proof there is more going on than picking which word should come next. The emergent abilities are quite specialised, such as naming geometric shapes. It’s not like it is teaching itself how to pilot a plane. You can find a list of the emergent abilities documented so far in this article. Be warned, they aren’t very impressive.
For these next sections we’re going to mostly refer to ChatGPT, but it applies to any publicly available generative AI including Google Bard and Microsoft Bing.
ChatGPT has ingested more pro forma correspondence, business documentation, business books, corporate communications, RFPs, agreements, contracts, pitch decks, etc, than you can possibly imagine.
This makes it the ultimate tool for producing the first draft of just about any document, including replies to emails, white papers, case studies, grant proposals, RFPs, etc. It can also serve as an editor, helping to turn your rambling sketch of an email or an article introduction or anything into clear, coherent sentences and paragraphs you can further revise.
With ChatGPT you never have to delay writing an email or starting a document because you don’t know where to begin. Or because you’re completely out of your comfort zone, in over your head, or have no idea what you’re supposed to say. ChatGPT has seen it all and can help you write whatever you need to write.
Now, this does come with a slight penalty, one that for most purposes won’t matter and in the long run will probably be welcome: ChatGPT has a distinctive “voice” that is noticeable once you’re familiar with it.
Also, because ChatGPT is always choosing the most probable word each and every time, what it outputs can be quite boring or cliché. Sometimes this is a good thing. Clear communication is based on convention. But don’t expect creativity or original ideas.
ChatGPT is perfect for streamlining your production of all those necessary business communications that humans use to keep things running. By adopting ChatGPT as part of your process, you will be able to execute on these faster and at a higher level, leaving you more time for the work that really moves the needle.
If you’re not sure how to command ChatGPT to produce what you need, this prompt engineering guide will help.
Of course everyone else you interact with will be doing the same thing. So expect the speed of business to increase and hope all your third parties prompt ChatGPT to keep their emails brief.
As Sam Altman, the CEO of OpenAI tweeted:
[embedded tweet from Sam Altman]
In this section we are going to focus on data and text related tools. The AI-generated image and video space is also huge: Dall-e, Midjourney, Adobe Firefly, Stable Diffusion, etc. There are hundreds of them, but for most businesses image and video is a part of marketing rather than their core product, so we’re sticking with the most common use cases.
There are already lots of startups offering AI-powered tools of every variety. But they’re about to face the twin behemoths of Google Workspace and Microsoft Teams. Both have recently announced the integration of AI assistants into their offerings (Google’s, Microsoft’s).
How will these work? Imagine ChatGPT knows everything about your business. It has memorised every report, every spreadsheet, every presentation, and every email. You can ask it for numbers or summaries or ask it to create presentations or documents.
Some of these features will help reduce the time you spend on the boring necessities of keeping your business running. Others, like integration with spreadsheets, will help you find answers, create forecasts and analyse trends faster and more easily than you could before. It’s possible you can’t currently do regular forecasts at all because no-one on your team has the expertise. That’s going to change.
Again, this is going to give you more time to spend on doing the things that really make a difference to your business – planning, strategy, talking to customers, building relationships with partners. Unless you make the mistake of burying yourself under all the reports it will be so easy to create. But you can always ask the system to summarise them for you.
At the time of writing there isn’t even a beta program for Microsoft and Google’s AI-powered offerings. They haven’t provided a rough date when they will be generally available. On the other hand, lots of startups are developing services based on OpenAI’s APIs, using the same LLM behind ChatGPT to create new products.
The site Super Tools has a database of AI-based startups. You might be able to find some products in there that can help you.
If we continue to focus on text (Super Tools includes image, audio and video tools as well) these products fall into two main categories of functionality: content generation and search.
Content generation covers things like chatbots, writing assistants and coding assistants. Some of these services are nothing more than a website that adds a detailed prompt (or context) to be sent along with your own instructions/queries to ChatGPT’s backend and the response is then passed back to you.
An example of this is VenturusAI. At least it’s free. Think of these services as a lightly tailored version of the standard ChatGPT experience already provided by OpenAI. This might be obscured by design or presentation. A few hours fiddling with a prompt in OpenAI’s ChatGPT interface might get you the same result without the cost of another SaaS subscription.
If the output is short enough, and like VenturusAI they’re nice enough to show you example results, you can just paste their examples into ChatGPT and ask it to duplicate the result but for your own inputs.
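To make that concrete, many of these products amount to little more than the following (a minimal sketch; the hidden “business analyst” prompt is our own stand-in, not any particular vendor’s):

```python
# Sketch of what many "AI-powered" SaaS tools do under the hood: prepend a
# fixed, detailed prompt to the user's input and forward it to OpenAI.
import openai

HIDDEN_PROMPT = ("You are an experienced business analyst. For the business idea "
                 "below, give a SWOT analysis, a target audience and three "
                 "marketing channels, in short bullet points.")

def analyse(idea: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": HIDDEN_PROMPT},
                  {"role": "user", "content": idea}],
    )
    return response["choices"][0]["message"]["content"]

print(analyse("A subscription service for office plant maintenance"))
```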
Content generation is already impacting programming, legal services, not to mention copywriting of all kinds, including real estate and catalogue listings.
The impact of content generation tools is already being felt. According to Microsoft, for projects using their Github Copilot code generation assistant, 40% of the code in those projects is now AI generated. Given that code probably took a fifth or a tenth of the time it would take a normal programmer to write it, the productivity increase is enormous.
Search is just what it sounds like, but imagine a search engine that’s smarter than Google and can respond to your search request with exactly the information you need written in a way that’s easy to understand.
Dedicated search tools are springing up based on OpenAI’s APIs. Some target specific use cases, like Elicit for searching scientific papers, others, like Libraria, are more general – upload any documents you want and it’ll index them and give you a “virtual assistant” to use as a chat interface to query them.
There is no reason you can’t use OpenAI’s APIs yourself. They offer methods to fine-tune a model and to create embeddings. You’ll need a programmer to do this. Or, if you’re feeling brave and/or patient, you can ask ChatGPT to help you build a solution.
Fine-tuning uses hundreds of prompt-response pairs (which you supply) in order to train an LLM to do things like answer chat queries based on information you care about. For example, you may get the question-response pairs from transcriptions of customer service enquiries. You upload them to OpenAI. It uses them to create a new, specialised version of one of their base models that is stored and runs on their servers. Once it is built you use the API to pass chat-based enquiries to your dedicated model and get responses back in return.
If you’re clever, some of these responses contain a message signalling that a human is needed to deal with the enquiry and your chat system can make that happen.
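Here is a rough sketch of that flow using the older (pre-1.0) OpenAI Python fine-tuning endpoints. The training file contents, base model and fine-tuned model name are illustrative.

```python
# Sketch of the fine-tuning flow: upload prompt/completion pairs, start a
# fine-tune job, then query the resulting model. Uses the older (pre-1.0)
# OpenAI Python library; file name, base model and model name are illustrative.
import openai

# training.jsonl contains lines like:
# {"prompt": "Customer: Do you ship to NZ?\nAgent:", "completion": " Yes, we ship..."}
upload = openai.File.create(file=open("training.jsonl", "rb"), purpose="fine-tune")

job = openai.FineTune.create(training_file=upload["id"], model="davinci")
print(job["id"])  # poll this job until it finishes and reports the new model name

# Once the job completes, use the fine-tuned model like any other:
answer = openai.Completion.create(
    model="davinci:ft-your-org-2023-06-01",  # placeholder for the name the job returns
    prompt="Customer: Do you ship to NZ?\nAgent:",
    max_tokens=100,
    stop=["\n"],
)
print(answer["choices"][0]["text"])
```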
Embeddings are used in querying across large amounts of text. Imagine asking normal language questions about information hidden in your company’s folder of reports and getting back answers.
For example, if you have hundreds of PDFs with details about the different models of widget you produce, you can create embeddings for all the documents, and then you will be able to ask things like “Which widget is best for sub-zero environments?” or “Which widget is green and 2 metres long?”. Even better, your customers will be able to ask those questions themselves.
These next few paragraphs give basic information on embeddings and how they are used. You might want to skip them the first time through.
Embeddings are basically a list of numbers. An embedding can be thought of as an address in “idea space”. Instead of having 4 dimensions like the space we live in, this “idea space” has over 1500. Every word or chunk of text can have a unique embedding generated for it by the LLM. Texts that are conceptually close together will have embeddings that are close together (based on a distance algorithm that works for 1500 dimensions).
For example, the embeddings for “apple” and “orange” will be close together because they are both fruit. But the embeddings for “mandarin” and “orange” will be even closer together because they are both citrus fruits.
Surprisingly, this also works when you get the embeddings for hundreds of words of text.
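A small sketch of that comparison, using OpenAI’s embeddings endpoint and cosine similarity as the distance measure (the model name is the one commonly used at the time of writing):

```python
# Sketch: get embeddings for three words and compare how close they are.
# Uses OpenAI's embeddings endpoint (pre-1.0 Python library) and cosine similarity.
import numpy as np
import openai

def embed(text: str) -> np.ndarray:
    result = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(result["data"][0]["embedding"])

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

apple, orange, mandarin = embed("apple"), embed("orange"), embed("mandarin")
print("apple vs orange:   ", cosine(apple, orange))
print("mandarin vs orange:", cosine(mandarin, orange))  # expect this to be higher
```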
Once you have embeddings for every chunk in every document you want to search stored in a database that can do those multi-dimensional distance calculations, like pinecone, or FAISS, you’re ready to do the actual search.
This is a bit clever. To do the search, you take the user’s query, along with a selection of pieces of your documents, and send it to your LLM, like ChatGPT, to generate an answer that is probably not completely correct but is close.
Then you get the embedding for that answer and use that to search your database for the chunks whose embeddings are closest in distance to it. You can then either present the pieces of document to your user, or send the question and those chunks (limited to the size of the allowed context), to ChatGPT for it to generate a properly structured answer.
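Putting those steps together, here is a minimal sketch using FAISS as the vector store. The document chunks are made up, the “draft answer” is hard-coded rather than generated, and the embed() helper comes from the previous sketch.

```python
# Sketch of the search flow: embed document chunks into a FAISS index, embed a
# rough draft answer to the user's question, and pull back the closest chunks.
# Reuses the embed() helper from the previous sketch; the chunks are illustrative.
import faiss
import numpy as np

chunks = [
    "The W-200 widget is rated for -40C and is ideal for sub-zero environments.",
    "The W-300 widget comes in green and measures 2 metres in length.",
    "All widgets carry a five year warranty.",
]

vectors = np.array([embed(c) for c in chunks]).astype("float32")
index = faiss.IndexFlatL2(vectors.shape[1])  # plain L2 distance index
index.add(vectors)

question = "Which widget is best for sub-zero environments?"
draft_answer = "Probably a widget rated for very low temperatures."  # in practice, ask the LLM
_, nearest = index.search(np.array([embed(draft_answer)]).astype("float32"), 2)

relevant = [chunks[i] for i in nearest[0]]
print(relevant)
# Finally, send `question` plus `relevant` back to ChatGPT for a structured answer.
```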
This article goes into more details and strategies on using embeddings.
If you’re more technically minded, you can connect ChatGPT to any number of tools using a library called Langchain. It cleverly uses prompts to direct ChatGPT to output calls to external services, like calculators, databases, web searches, etc, and Langchain handles calling the service, collecting the results and then adding them to the current conversation with ChatGPT.
This pre-dates and is similar to OpenAI’s plugins, but it’s more versatile and can be tailored to your specific needs.
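A small sketch of the idea using Langchain’s agent interface as it stood in early 2023; the exact import paths have shifted between versions, so treat the names as indicative rather than definitive.

```python
# Sketch: a Langchain agent that lets the LLM call a calculator and a web
# search tool. Import paths reflect early-2023 releases of the library and
# may differ in newer versions.
from langchain.llms import OpenAI
from langchain.agents import load_tools, initialize_agent, AgentType

llm = OpenAI(temperature=0)
tools = load_tools(["llm-math", "serpapi"], llm=llm)  # serpapi needs its own API key

agent = initialize_agent(tools, llm,
                         agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
                         verbose=True)

agent.run("What is the median Sydney house price, and what is 20% of it as a deposit?")
```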
Using LLMs to create autonomous agents has grabbed a lot of recent attention. By integrating a generative AI like ChatGPT into a system that can do things like search the web, run commands in a terminal, post to social media and other actions that can be driven by software, you can build a system that can make simple plans and execute them. Kind of.
These systems, like AutoGPT and BabyAGI work by using a special prompt (that you can see an example of here) that tells ChatGPT what it is to do and includes a list of external commands it can call. It also tells it to only output information in JSON format instead of human readable text – so it can be easily read by other programs.
AutoGPT feeds the prompt to ChatGPT and collects its response in JSON format. It executes any commands it finds in the response and incorporates the results from those to create the next prompt to feed to ChatGPT.
It uses ChatGPT’s context, which is about 3000 words, to create a short term memory for ChatGPT that can hold the goal you’ve given it, the remaining steps in its “plan” and any intermediate results it needs. This makes it somewhat capable of devising and executing short plans. We use the word “somewhat” because it needs constant monitoring and errors can cause it to go off track.
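A stripped-down toy version of that loop is sketched below. The command set, JSON shape and prompt wording are our own simplifications and bear little resemblance to the real AutoGPT prompt.

```python
# Toy version of the AutoGPT-style loop: the model replies in JSON naming a
# command, we execute it, and feed the result back in as the next prompt.
import json
import openai

SYSTEM = ("You are an agent working towards a goal. Reply ONLY with JSON like "
          '{"thought": "...", "command": "search" or "finish", "args": "..."}')

def run_command(command: str, args: str) -> str:
    if command == "search":
        return f"(pretend web search results for: {args})"  # stub for a real tool
    return ""

history = [{"role": "system", "content": SYSTEM},
           {"role": "user", "content": "Goal: find the current cash rate and summarise it."}]

for _ in range(5):  # hard cap so a confused model can't loop forever
    reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
    content = reply["choices"][0]["message"]["content"]
    step = json.loads(content)  # a real implementation would handle malformed JSON
    if step["command"] == "finish":
        print(step["args"])
        break
    result = run_command(step["command"], step["args"])
    history.append({"role": "assistant", "content": content})
    history.append({"role": "user", "content": f"Command result: {result}"})
```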
The short term memory can be extended using embeddings as we discussed above. And there is an active community working hard to make the LLM agents more robust and more capable. But for now, you may be able to use an agent to automate a simple workflow. It is particularly good at compiling and summarising information from the internet. Just be sure to do lots of testing and don’t make it an essential part of your infrastructure.
Because the OpenAI API returns JSON for your requests, you can access it from all the best no-code/low code app builders which support third party APIs.
If you don’t need apps but just want to integrate generative AI into your workflows, Zapier and Make now support OpenAI in their integrations. You can use it to automatically draft emails or generate customer service tickets based on incoming emails. Anywhere in your workflows where a human has been needed to make a basic planning or routing decision is a candidate for being automated now.
Using the same tools and techniques you would use to build inhouse AI tools you can build a product.
But that’s just the first level. Beyond leveraging OpenAI’s APIs you can use services like Cerebras to build and train your own model.
Training your own model might be out of reach, but fine-tuning an existing model might be all you need. Fine-tuning a model for a specific domain already has a name – “models as a service”. These models can help users in a particular domain do everything from fix their spelling to estimate the cost of repairs to design new molecules. Of course your use case will dictate the model you fine-tune which will impact your costs.
The power of having your own fine-tuned model that users interact with is that once you have users you now have a source for even more training data, creating an ongoing cycle of fine-tuning and model performance improvement that can build a moat for you in your market.
We hope this article has given you the understanding, inspiration and links that you need to get started using generative AI in your business.
Start small, use ChatGPT to draft a few emails (but double check them). Browse Super Tools and see if there are any tools on there that might address one of your workflow or business process pain points.
Small steps and strategic integration of generative AI is key to taking advantage of this huge technological leap forward. Start today and see how fast and far it can take you.