Generative AIs like ChatGPT, Bard and Bing are changing the world faster than we can imagine. So fast that there are now ChatGPT-like AIs that can run on smartphones. So fast that the cost of training a ChatGPT-like AI has dropped from $4.6 million in 2020 to $450k today. And it’s happening so fast that startups are seeing their business model trashed by Google and Microsoft before they can get traction. The speed of change is making people suspect OpenAI is using ChatGPT to speed up their development of new AI and features.
If you’re a startup or a small to medium business, generative AI is going to accelerate and empower you. You and your team are going to work smarter and work faster. You’re going to do more with less, or grow and do much, much more than you imagine possible.
AI is going to make it possible for a lone founder to do what a medium-sized company does today, and it will allow a medium-sized company to do what right now only a big company can do.
This shift in ability to execute is making people worried that jobs are going to be lost as everyone incorporates AI into their workflows. However, as has been pointed out by many commentators, if your revenue per employee keeps going up as they complete work faster and do more using AI, why would you fire anyone?
How much of a change in productivity will we see? A draft working paper titled “GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models”, released on March 27, 2023 by Eloundou et al., makes the following estimates:
“Our findings reveal that around 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while approximately 19% of workers may see at least 50% of their tasks impacted. … The projected effects span all wage levels, with higher-income jobs potentially facing greater exposure to LLM capabilities and LLM-powered software. … Our analysis suggests that, with access to an LLM, about 15% of all worker tasks in the US could be completed significantly faster at the same level of quality. When incorporating software and tooling built on top of LLMs, this share increases to between 47 and 56% of all tasks.”
To help you make the most of the changes generative AI is going to bring, we’re going to start with the basics and give you a quick background on ChatGPT and how it and other generative AIs work (this includes Microsoft’s Bing and Google’s Bard).
After that we’ll go through the ways you can take advantage of AI and not be left behind.
To paraphrase a random internet commenter – “People would be shocked if they understood how simple the software behind ChatGPT really is”. If you’re technically minded, this will show you how to build something similar to ChatGPT in 60 lines of code. It can even load the data used by GPT-2, one of ChatGPT’s predecessors.
ChatGPT was built by OpenAI. It’s a type of Large Language Model (LLM) and part of the class of AIs called “generative AI”. A language model is a computer program designed to “understand” and generate human language (thus “generative AI”). Language models take as input a bunch of text and build statistics based on that text – things like which letter is most likely to appear next to the letter “k” or which word is most likely to come before “banana” – then use those statistics to generate new text on demand.
When a language model is generating text, like in response to a question, at the most basic level it is simply looking at the text, in this case a question, and using the statistics it has generated to choose the word most likely to come next.
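If you like to see ideas as code, here is a toy sketch of that statistics-building step. It is purely illustrative (real LLMs use neural networks trained over tokens, not simple word counts) and the tiny corpus is made up for the example.

```python
from collections import Counter, defaultdict

# Toy corpus: count which word most often follows each word.
corpus = "the cat sat on the mat the bat spat on the hat".split()

next_word_counts = defaultdict(Counter)
for current_word, following_word in zip(corpus, corpus[1:]):
    next_word_counts[current_word][following_word] += 1

def most_likely_next(word):
    """Return the word that most often followed `word` in the corpus."""
    return next_word_counts[word].most_common(1)[0][0]

print(most_likely_next("on"))   # -> "the"
print(most_likely_next("the"))  # -> "cat" (ties resolved by first occurrence)
```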
A Large Language Model (LLM) is just a language model trained on a large amount of text. It is estimated that GPT-3, the LLM that underlies the initial version of ChatGPT, was trained on 300-400 billion words of text taken from the internet and books. That training was, basically, showing it a word from a document, like this article, along with the approximately 100-500 words that preceded it in the document (only OpenAI knows the actual number).
So if an LLM was fed this very article, it might be shown the word “human” and also the words “ChatGPT was built by OpenAI … and generate” that led up to it.
It turns out that when an LLM is fed nearly half a trillion words and their preceding text to build statistics with, those statistics capture quite complex and subtle features of language. That isn’t really a surprise. Human language isn’t random. It has a predictable structure, otherwise we couldn’t talk to each other.
It’s not just predictable. There is a lot of repetition. Repetition in the phrases we use, like “how’s the weather”, but also in sentence structure, “The cat sat on the mat. The bat spat on the hat.”. Even document conventions. Imagine how many website privacy policies ChatGPT would have been trained on by using the internet as a source of text.
When you ask ChatGPT a question, the words of your question become the preceding text for the next word. This preceding text is called the “context”. It’s also known as “the prompt”.
Your question, the context, is used by ChatGPT to find the most statistically probable word that would begin the answer.
Let’s say your question is “Why is the sky blue?”. First, imagine how many times that question appears on the internet and in books. ChatGPT has definitely incorporated it many times into its statistics.
“Why is the sky blue?” is a 5 word question, and forms the 5 word context. So what is the 6th word of the context going to be? It’s going to be the word most likely to appear 5 words after “why” in all the text ChatGPT has ever seen, as well as 4 words after “is” and 3 words after “the” and 2 words after “sky” and 1 word after “blue”.
(The question mark is also important, but we’re ignoring that for this simple explanation)
That word, the most probable word to fit all those conditions at the same time, might be “The”. It’s a common way to start an answer. Now our context has 6 words:
“Why is the sky blue? The”
And the process is repeated:
“Why is the sky blue? The sky”
and repeated:
“Why is the sky blue? The sky is”
“Why is the sky blue? The sky is blue”
“Why is the sky blue? The sky is blue because”
The context grows one word at a time until the answer is completed. ChatGPT has learned what a complete answer looks like from all the text it has been fed (plus some extra training provided by OpenAI).
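If you want to see that loop as code, here is a toy sketch. The probability table is hard-coded and only looks at the last word of the context, which is a huge simplification: a real LLM weighs every word in the context and usually samples rather than always taking the single most likely word.

```python
# Hard-coded "most likely next word" table, purely for illustration.
next_word_table = {
    "blue?": "The",
    "The": "sky",
    "sky": "is",
    "is": "blue",
    "blue": "because",
    "because": "<end>",
}

context = "Why is the sky blue?".split()
while True:
    # Pick the most likely next word given the context (here: just the last word).
    next_word = next_word_table.get(context[-1], "<end>")
    if next_word == "<end>":
        break
    context.append(next_word)
    print(" ".join(context))
```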
You may have heard about “prompts” and “prompt engineering”. Because every word in the context has an effect on finding the next most probable word for the answer, every word you include acts to constrain or shape the possibilities for the next word. Ask ChatGPT a short question and note the answer. Add a few related words to the prompt and the answer shifts in a predictable way.
This, in a nutshell, is what prompt engineering is about. You are trying to choose the best words to use to constrain ChatGPT’s output to the type of content you are interested in. Take a common prompt like this:
“Imagine you are an expert copywriter and digital marketer with extensive experience in crafting engaging and persuasive ad copy for Facebook ads. Your goal is to create captivating ad copy for promoting a specific product or service”
Don’t be fooled into thinking that there is some kind of software brain on a server in a giant data centre in the Pacific Northwest imagining it is an expert copywriter. Instead, think of all the websites run by copywriters and digital marketers and their blog articles where they discuss Facebook ads or writing engaging copy.
You can read OpenAI’s guide to prompt engineering here, and this guide to prompt engineering goes even deeper.
It is without doubt amazing that ChatGPT does what it does. It is also amazing that the process is so deceptively simple – looking at which words come before other words. But it takes nearly half a trillion words of human-to-human communication to provide the data to make it happen.
This is a simplification and leaves out important details, but what you need to know is that generative AIs like ChatGPT, Bard and Bing are always only choosing the next most likely word to add to a reply. It’s a mechanical process prone to producing false information. On top of this unavoidable feature of blind, probabilistic output, actual randomness is added to their choices to make the output more “creative” or “interesting”.
Generative AIs have been trained on enough logic and reasoning examples to mimic how we use language to communicate logic and reason. But they are, in the end, text production programs. There is no logic or reasoning as humans use it involved in producing that text. Even if the text contains a logical argument. So always review carefully what text they produce for you.
Having said all that, there is an argument that generative AIs, in particular LLMs, might be doing more than just producing text. This argument says they might be building a model of the world and the features of the world as they build their billions of statistics about text. And that these models might be what is making generative AIs so powerful.
Lending some partial support to this argument is the number of “emergent” abilities generative AI is showing. The backers of this argument say these are proof there is more going on than picking which word should come next. The emergent abilities are quite specialised, such as naming geometric shapes. It’s not like it is teaching itself how to pilot a plane. You can find a list of the emergent abilities documented so far in this article. Be warned, they aren’t very impressive.
For these next sections we’re going to mostly refer to ChatGPT, but the advice applies to any publicly available generative AI, including Google Bard and Microsoft Bing.
ChatGPT has ingested more pro forma correspondence, business documentation, business books, corporate communications, RFPs, agreements, contracts, pitch decks, etc, than you can possibly imagine.
This makes it the ultimate tool for producing the first draft of just about any document, including replies to emails, white papers, case studies, grant proposals, RFPs, etc. It can also serve as an editor, helping to turn your rambling sketch of an email or an article introduction or anything into clear, coherent sentences and paragraphs you can further revise.
With ChatGPT you never have to delay writing an email or starting a document because you don’t know where to begin. Or because you’re completely out of your comfort zone, in over your head, or have no idea what you’re supposed to say. ChatGPT has seen it all and can help you write whatever you need to write.
Now, this does come at a slight penalty, one that for most things won’t matter and in the long run will probably be welcome: ChatGPT has a distinctive “voice” that will be noticeable to anyone familiar with it.
Also, because ChatGPT is always choosing the most probable word each and every time, what it outputs can be quite boring or cliché. Sometimes this is a good thing. Clear communication is based on convention. But don’t expect creativity or original ideas.
ChatGPT is perfect for streamlining your production of all those necessary business communications that humans use to keep things running. By adopting ChatGPT as part of your process, you will be able to execute on these faster and at a higher level, leaving you more time for the work that really moves the needle.
If you’re not sure how to command ChatGPT to produce what you need, this prompt engineering guide will help.
Of course everyone else you interact with will be doing the same thing. So expect the speed of business to increase and hope all your third parties prompt ChatGPT to keep their emails brief.
In this section we are going to focus on data and text related tools. The AI-generated image and video space is also huge: Dall-e, Midjourney, Adobe Firefly, Stable Diffusion, etc. There are hundreds of them, but for most businesses image and video are part of marketing rather than their core product, so we’re sticking with the most common use cases.
There are already lots of startups offering AI-powered tools of every variety. But they’re about to face the twin behemoths of Google Workspace and Microsoft 365. Both have recently announced the integration of AI assistants into their offerings (Google’s, Microsoft’s).
How will these work? Imagine ChatGPT knows everything about your business. It has memorised every report, every spreadsheet, every presentation, and every email. You can ask it for numbers or summaries or ask it to create presentations or documents.
Some of these features will help reduce the time you spend on the boring necessities of keeping your business running. Others, like integration with spreadsheets, will help you find answers, create forecasts and analyse trends faster and more easily than you could before. It’s even possible you can’t do regular forecasts at all right now because no one on your team has the expertise. That’s going to change.
Again, this is going to give you more time to spend on doing the things that really make a difference to your business – planning, strategy, talking to customers, building relationships with partners. Unless you make the mistake of burying yourself under all the reports that will now be so easy to create. But you can always ask the system to summarise them for you.
At the time of writing there isn’t even a beta program for Microsoft and Google’s AI-powered offerings. They haven’t provided a rough date when they will be generally available. On the other hand, lots of startups are developing services based on OpenAI’s APIs, using the same LLM behind ChatGPT to create new products.
The site Super Tools has a database of AI-based startups. You might be able to find some products in there that can help you.
If we continue to focus on text (Super Tools includes image, audio and video tools as well) these products fall into two main categories of functionality: content generation and search.
Content generation covers things like chatbots, writing assistants and coding assistants. Some of these services are nothing more than a website that adds a detailed prompt (or context) to be sent along with your own instructions/queries to ChatGPT’s backend and the response is then passed back to you.
An example of this is VenturusAI. At least it’s free. Think of these services as a lightly tailored version of the standard ChatGPT experience already provided by OpenAI. This might be obscured by design or presentation. A few hours fiddling with a prompt in OpenAI’s ChatGPT interface might get you the same result without the cost of another SaaS subscription.
If the output is short enough, and like VenturusAI they’re nice enough to show you example results, you can just paste their examples into ChatGPT and ask it to duplicate the result but for your own inputs.
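For the technically curious, here is a minimal sketch of how such a wrapper works, using the OpenAI Python library as it looked in 2023 (function names and models change between versions, so treat the details as indicative). The hidden prompt and the product example are made up for illustration.

```python
import openai  # pip install openai (2023-era API; newer versions differ)

openai.api_key = "YOUR_API_KEY"

# The "secret sauce" of many wrapper services is little more than this hidden prompt.
HIDDEN_PROMPT = (
    "Imagine you are an expert copywriter and digital marketer with extensive "
    "experience in crafting engaging and persuasive ad copy for Facebook ads."
)

def generate_ad_copy(user_input: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": HIDDEN_PROMPT},
            {"role": "user", "content": user_input},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(generate_ad_copy("Write three short ad variations for a reusable coffee cup."))
```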
Content generation is already impacting programming and legal services, not to mention copywriting of all kinds, including real estate and catalogue listings.
The impact of content generation tools is already being felt. According to Microsoft, for projects using their GitHub Copilot code generation assistant, 40% of the code in those projects is now AI-generated. Given that generating that code probably took a fifth or a tenth of the time a programmer would need to write it by hand, the productivity increase is enormous.
Search is just what it sounds like, but imagine a search engine that’s smarter than Google and can respond to your search request with exactly the information you need written in a way that’s easy to understand.
Dedicated search tools are springing up based on OpenAI’s APIs. Some target specific use cases, like Elicit for searching scientific papers, others, like Libraria, are more general – upload any documents you want and it’ll index them and give you a “virtual assistant” to use as a chat interface to query them.
There is no reason you can’t use OpenAI’s APIs yourself. They offer methods to fine-tune a model and to create embeddings. You’ll need a programmer to do this. Or, if you’re feeling brave and/or patient, you can ask ChatGPT to help you build a solution.
Fine-tuning uses hundreds of prompt-response pairs (which you supply) in order to train an LLM to do things like answer chat queries based on information you care about. For example, you may get the question-response pairs from transcriptions of customer service enquiries. You upload them to OpenAI. It uses them to create a new, specialised version of one of their base models that is stored and runs on their servers. Once it is built you use the API to pass chat-based enquiries to your dedicated model and get responses back in return.
If you’re clever, some of these responses contain a message signalling that a human is needed to deal with the enquiry and your chat system can make that happen.
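As a rough sketch of what that looks like in practice, here is the general shape of the fine-tuning workflow with the 2023-era OpenAI Python library. The fine-tuning endpoints, file format and supported base models change over time, so check OpenAI’s current documentation; the data and model names below are illustrative only.

```python
import json
import openai  # 2023-era openai-python; fine-tuning APIs change over time

openai.api_key = "YOUR_API_KEY"

# 1. Prepare prompt-response pairs, e.g. from customer service transcripts.
pairs = [
    {"prompt": "Customer: My widget won't turn on.\nAgent:",
     "completion": " Sorry to hear that! First, check the power cable is firmly connected."},
    # ...hundreds more pairs...
]
with open("training_data.jsonl", "w") as f:
    for pair in pairs:
        f.write(json.dumps(pair) + "\n")

# 2. Upload the file to OpenAI and start a fine-tune job on a base model.
upload = openai.File.create(file=open("training_data.jsonl", "rb"), purpose="fine-tune")
job = openai.FineTune.create(training_file=upload["id"], model="davinci")

# 3. Once the job completes, send enquiries to your specialised model via the API.
answer = openai.Completion.create(
    model=job["fine_tuned_model"],  # only populated after the job finishes
    prompt="Customer: My widget is making a clicking noise.\nAgent:",
    max_tokens=150,
)
print(answer["choices"][0]["text"])
```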
Embeddings are used in querying across large amounts of text. Imagine asking normal language questions about information hidden in your company’s folder of reports and getting back answers.
For example, if you have hundreds of PDFs with details about the different models of widget you produce, you can create embeddings for all the documents, and then you will be able to ask things like “Which widget is best for sub-zero environments?” or “Which widget is green and 2 metres long?”. Even better, your customers will be able to ask those questions themselves.
These next few paragraphs give some basic information on embeddings and how they are used. You might want to skip them the first time through.
Embeddings are basically a list of numbers. An embedding can be thought of as an address in “idea space”. Instead of having 4 dimensions like the space we live in, this “idea space” has over 1500. Every word or chunk of text can have a unique embedding generated for it by the LLM. Texts that are conceptually close together will have embeddings that are close together (based on a distance algorithm that works for 1500 dimensions).
For example, the embeddings for “apple” and “orange” will be close together because they are both fruit. But the embeddings for “mandarin” and “orange” will be even closer together because they are both citrus fruits.
Surprisingly, this also works when you get the embeddings for hundreds of words of text.
Once you have embeddings for every chunk of every document you want to search, stored in a database that can do those multi-dimensional distance calculations (like Pinecone or FAISS), you’re ready to do the actual search.
This is a bit clever. To do the search, you take the user’s query, along with a selection of pieces of your documents, and send it to your LLM, like ChatGPT, to generate an answer that is probably not completely correct but is close.
Then you get the embedding for that answer and use it to search your database for the chunks whose embeddings are closest in distance to it. You can then either present those pieces of the documents to your user, or send the question and those chunks (limited to the size of the allowed context) to ChatGPT for it to generate a properly structured answer.
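Here is a minimal sketch of the whole embeddings approach, using the 2023-era OpenAI embeddings endpoint and a plain cosine-similarity search in NumPy instead of a vector database like Pinecone or FAISS. The document chunks, model name and query are made up for illustration.

```python
import numpy as np
import openai  # 2023-era openai-python; endpoint and model names change between versions

openai.api_key = "YOUR_API_KEY"

def embed(text: str) -> np.ndarray:
    """Return the embedding vector (a long list of numbers) for a chunk of text."""
    result = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(result["data"][0]["embedding"])

# 1. Embed every document chunk once (in practice, store these in a vector database).
chunks = [
    "The W-200 widget is rated to -40C, making it ideal for sub-zero environments.",
    "The W-350 widget is green, 2 metres long, and designed for indoor use.",
]
chunk_vectors = [embed(c) for c in chunks]

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# 2. At query time, embed the question (or a draft answer generated by the LLM)
#    and find the chunks whose embeddings are closest to it.
query_vector = embed("Which widget is best for sub-zero environments?")
scores = [cosine_similarity(query_vector, v) for v in chunk_vectors]
best_chunk = chunks[int(np.argmax(scores))]

# 3. Hand the question plus the best chunks back to ChatGPT to write the final answer.
print(best_chunk)
```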
This article goes into more details and strategies on using embeddings.
If you’re more technically minded, you can connect ChatGPT to any number of tools using a library called LangChain. It cleverly uses prompts to direct ChatGPT to output calls to external services, like calculators, databases, web searches, etc, and LangChain handles calling the service, collecting the results, and adding them to the current conversation with ChatGPT.
This pre-dates and is similar to OpenAI’s plugins, but it’s more versatile and can be tailored to your specific needs.
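As a flavour of what that looks like, here is a minimal sketch using LangChain’s interface as it stood in 2023 (the library evolves quickly, so treat the imports and function names as indicative rather than definitive). It wires an OpenAI model to a calculator tool that LangChain will call on the model’s behalf.

```python
# pip install langchain openai -- reflects LangChain's 2023-era API, which changes often
from langchain.llms import OpenAI
from langchain.agents import load_tools, initialize_agent

llm = OpenAI(temperature=0, openai_api_key="YOUR_API_KEY")

# Give the LLM access to an external calculator tool. LangChain prompts the model,
# spots when it asks to use the tool, runs it, and feeds the result back in.
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)

agent.run("What is 12.5% of $48,000?")
```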
Using LLMs to create autonomous agents has grabbed a lot of recent attention. By integrating a generative AI like ChatGPT into a system that can do things like search the web, run commands in a terminal, post to social media and other actions that can be driven by software, you can build a system that can make simple plans and execute them. Kind of.
These systems, like AutoGPT and BabyAGI, work by using a special prompt (you can see an example here) that tells ChatGPT what it is to do and includes a list of external commands it can call. It also tells it to only output information in JSON format instead of human-readable text, so it can be easily read by other programs.
AutoGPT feeds the prompt to ChatGPT and collects its response in JSON format. It executes any commands it finds in the response and incorporates the results from those to create the next prompt to feed to ChatGPT.
It uses ChatGPT’s context, which is about 3000 words, to create a short term memory for ChatGPT that can hold the goal you’ve given it, the remaining steps in its “plan” and any intermediate results it needs. This makes it somewhat capable of devising and executing short plans. We use the word “somewhat” because it needs constant monitoring and errors can cause it to go off track.
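To make the mechanism concrete, here is a heavily stripped-down sketch of that loop. It is not AutoGPT’s actual code: the prompt wording, the made-up web_search command and the JSON shape are all illustrative, and it uses the 2023-era OpenAI Python library.

```python
import json
import openai  # 2023-era openai-python

openai.api_key = "YOUR_API_KEY"

SYSTEM_PROMPT = (
    'You are an autonomous agent. Respond ONLY with JSON of the form '
    '{"thought": "...", "command": "web_search" or "finish", "argument": "..."}'
)

def web_search(query: str) -> str:
    # Placeholder: call a real search API here.
    return f"(search results for: {query})"

goal = "Summarise recent news about widget manufacturing."
history = [{"role": "system", "content": SYSTEM_PROMPT},
           {"role": "user", "content": goal}]

for _ in range(5):  # cap the steps so a confused agent can't loop forever
    reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
    message = reply["choices"][0]["message"]["content"]
    step = json.loads(message)  # raises an error if the model strays from JSON
    if step["command"] == "finish":
        print(step["argument"])
        break
    result = web_search(step["argument"])  # only two commands in this sketch
    history.append({"role": "assistant", "content": message})
    history.append({"role": "user", "content": f"Command result: {result}"})
```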
The short term memory can be extended using embeddings as we discussed above. And there is an active community working hard to make the LLM agents more robust and more capable. But for now, you may be able to use an agent to automate a simple workflow. It is particularly good at compiling and summarising information from the internet. Just be sure to do lots of testing and don’t make it an essential part of your infrastructure.
Because the OpenAI API returns JSON for your requests, you can access it from all the best no-code/low-code app builders that support third-party APIs.
If you don’t need apps but just want to integrate generative AI into your workflows, Zapier and Make now support OpenAI in their integrations. You can use it to automatically draft emails or generate customer service tickets based on incoming emails. Anywhere in your workflows where a human has been needed to make a basic planning or routing decision is a candidate for being automated now.
Using the same tools and techniques you would use to build inhouse AI tools, you can build a product.
But that’s just the first level. Beyond leveraging OpenAI’s APIs you can use services like Cerebras to build and train your own model.
Training your own model might be out of reach, but fine-tuning an existing model might be all you need. Fine-tuning a model for a specific domain already has a name – “models as a service”. These models can help users in a particular domain do everything from fixing their spelling to estimating the cost of repairs to designing new molecules. Of course, your use case will dictate the model you fine-tune, which will impact your costs.
The power of having your own fine-tuned model that users interact with is that once you have users you now have a source for even more training data, creating an ongoing cycle of fine-tuning and model performance improvement that can build a moat for you in your market.
We hope this article has given you the understanding, inspiration and links that you need to get started using generative AI in your business.
Start small: use ChatGPT to draft a few emails (but double-check them). Browse Super Tools and see if there are any tools on there that might address one of your workflow or business process pain points.
Small steps and strategic integration of generative AI are key to taking advantage of this huge technological leap forward. Start today and see how fast and how far it can take you.
Team extension, extended team & out-sourcing FAQ
These 4 terms all refer to using a service provider to source and contract remote workers on a temporary (though possibly long term) basis.
There is one stand-out – staff augmentation can be used in a more general sense. You can use staff augmentation to refer to placing people in multiple roles throughout a business. But extended team, dedicated team, and team extension refer specifically to adding people to a particular team or even a particular team project.
Off-shoring is a general term that refers to using workers of a service provider in another country to fill roles or perform role-related tasks, within your business.
Near-shoring is similar to off-shoring but it implies that the workers are located in a nearby country or time zone to reduce the management and collaboration difficulties that working across widely different time zones can create.
Out-sourcing is when a project or service that would traditionally be executed in-house is handled completely by an external service provider. The service provider is normally located off-shore in an attempt to reduce costs.
Extended team, dedicated team or team extension is when a project team is expanded by the hiring of remote team members through a team extension provider. The extended team members working remotely report to the same management as the in-house team, they work side-by-side with the in-house team on any projects, and participate in all meetings, but all their necessary resources – computers, office space, HR, etc – are supplied by the team extension provider.
Under the team extension model you are responsible for managing your own project even though the work is being done by external contractors. Under an out-sourcing model the project management would also be handled externally.
The benefits of the team extension model are that you have complete control over the project and complete visibility into how it is progressing. You can spot, diagnose and fix any problems as soon as they occur.
The drawback of the team extension model is that you need a competent project manager inhouse in order to see the project to successful completion.
The extended team model, or the extended development team model, is just the team extension model by another name. You will see both used online. Which one an author favours depends mostly on which region they’re in.
The dedicated team model is yet another term for the team extension model. It is used to make explicit that the team members you contract through your service provider are focused purely on your project. While this is the default whether you call it an extended team model or team extension model, it does serve to differentiate it from out-sourcing, where you have no control over team continuity.
The core team is made up of inhouse employees who established the project and were solely responsible for moving the project forward before a team extension is added to the effort.
The core team holds the business and domain expertise that the project relies on. They work with the team extension members under a project or product manager to complete the project and serve as a source of guidance and deep knowledge for the extended team.
A team extension creates three main advantages for a business. These are particularly beneficial when the business is following the extended development team model for software based products.
Unlike in out-sourcing, the management of the remote members of an extended team is handled by the business contracting them. This requires you to have an inhouse project manager experienced in dealing with remote team members.
Post-Covid this is now the status quo. But if a business has pursued a back-to-the-office strategy for their developers, care needs to be taken that the remote members of the extended team are fully integrated into the day-to-day operations and culture of the business, and especially of the project they are working on.
In the unlikely event that a business believes an extended team member isn’t performing well, this challenge is resolved in a similar manner to how it would be resolved for an inhouse employee.
The situation is better than that with a standard remote employee, because the extended team member is also under local management and monitoring by the service provider.
If the problems turn out to be unresolvable, it is quick and easy to select, vet, and contract a new extended team member from the service provider’s talent pool, with extra assistance from the service provider for the handover.
An extended team can be contracted to work directly on a project. This can be in order to access expertise to develop certain features, or to shorten timelines for project completion.
Outside of software development on a business’s product, an extended team can be contracted to provide support services, such as devops for an existing team or project, and to keep important and complex applications online and available to customers.
Moving beyond software, an extended team can provide design and UX expertise early in a project, as well as ongoing customer service support and technical support once a project is online.
The big challenges in a team extension are simply variants of the same challenges businesses face with any employee. Onboarding is critical.
Having a manager or mentor available to chat or video call in order to quickly resolve the kinds of problems that show up in the early stages of employment will make onboarding easier and get members of the team extension working productively as quickly as possible.
The other major challenge is integrating the team extension staff with the inhouse team. But this can be handled by simply holding meetings, stand-ups, code reviews, etc, via video so that everyone can participate on an equal footing.
If you want more tips on managing an extended development team read our article The simple secrets to making your extended team work.
Right here. SoftwareSeni is Sydney-based and our main focus is offering extended team services to Australian startups and businesses that think like startups.
This focus is why our talent pool is based in Indonesia. It provides an extensive time zone overlap with Australia that we find makes working with an extended development team so much more effective, both in terms of quality of communication and responsiveness.
Our team of developers (as well as design, UX, devops, and customer service) is based in Yogyakarta. The city is a major learning centre with a large, well-established tech culture. This has allowed us to pick and choose our team members to build the deep expertise that will benefit any project.
We can provide expertise at scales from a part-time single developer up to a team of dozens and for any stage of product development, from ideation to maintenance mode.
If you’re outside of Australia and have strong remote team management capabilities, you might still find the quality and range of our tech talent worth the larger time zone difference.
So if you’re looking to increase your headcount and are searching out tech talent to deliver the outcomes your business needs, get in touch.
Extended Team Model – all you need to know to build the dev team your business needs
The extended team model could be the best tactic to get your startup into the market or drive your business ahead of the competition. Using the extended team model can help you grow your capabilities without eating your margin. And it is the best way to respond quickly to market changes and moves by the competition. Let’s dive into the details behind the Extended Team Model.
The Extended Team Model is an organisational structure where the core team that provides deep institutional and product knowledge is based in-house and works closely with one or more developers who work remotely.
The size of an extended team, and here we are talking mainly about extended software development teams, depends on the needs of the business.
A startup might have a core team of a single Product Manager and the entire development team is an extended team. A corporate business unit might need expertise they can’t access in-house. A business with an established online presence might need some regular devops hours to keep their website and backend working smoothly.
The extended team model lets businesses scale their hiring to exactly match their needs. As extended software development teams are assembled out of a single provider’s talent pool, that hiring can happen quickly – sometimes in days, often not longer than two weeks, rather than the months it can take to attract, vet, and interview team members with the normal hiring process.
The big differences between the extended team model and outsourcing are control and integration.
Outsourcing works like a black box. You feed in specifications and you get code or product out. There are deliverables and meetings, but you have zero insight into who is doing the coding, how focused they are on your particular project or even their level of expertise.
With the extended team model you are involved in team selection. You know who will be part of your extended team and you will know, either through testing or interviews, their ability level.
Your extended team members, if they are full time, will be devoted only to your project. Unlike an outsourced developer that you will never contact directly, extended development team members are integrated into your team. They participate in scrums, they work directly on your codebase using the same tools as the rest of your team. Your goals are their goals.
The extended team model differs from a remote team by being more consistent, more flexible and more reliable.
Here the key feature of an extended team is that every extended team member is part of the same talent pool. They come from a single extended team member provider, such as SoftwareSeni. This means they share the same work culture, have the same training (though they may be at different levels of expertise), and have access to the same resources, including dedicated HR and support. And for your business, this means you have a single point of contact to deal with for upsizing and downsizing your extended team, swapping in new skillsets and so on.
The members of a remote team won’t have this additional layer of management. You will be managing each remote team member directly with no insight into their working conditions, work habits or day-to-day productivity.
A remote team will also have to be assembled by going through the same slow hiring process as an in-house team, instead of the rapid selection process used with an extended team provider.
The extended team model has a number of advantages, some already discussed above. Speed of hiring is a big one this article keeps mentioning. Another is availability of expertise. Depending upon your location, certain skill sets might be beyond your budget or simply unavailable. Your product vision or business model might not survive these limitations.
So, being able to assemble a team with the requisite skills out of the talent pool of an extended team member provider can be the difference between success or failure.
The extended team model allows you to grow headcount without growing your footprint. All resources your extended team members need are provided by the extended team provider – computers, desks, office space.
Another key feature that makes using the extended team model so powerful is the flexibility to grow and shrink your team based on your exact needs in the moment, or to swap out extended developers for different skill sets as you move through different stages of product development.
Businesses should use the extended team model when they have a clear and detailed vision of what they want to achieve but are facing constraints across time, funding, talent or space.
That kind of covers just about every business, doesn’t it?
Prior experience in managing developers or projects and exposure to strategies for working with remote team members (almost universal now in 2023) are the two biggest requirements for using the extended team model.
If you don’t have this in-house experience you might want to take a second look at out-sourcing or hiring a software development agency like SoftwareSeni directly.
Working with an extended team is not much different from working with a mixed in-house/remote team. You face the same challenges of integrating staff into your processes and work culture, and the overhead that comes with suddenly having a higher headcount. It is not core team vs extended team, it’s core team + extended team.
As an extended team provider we have some experience in this matter. We have an article on the simple secrets to making your extended team work, and of course we are focused on helping our extended team clients succeed and are always available for guidance, support and coaching.
Right here. SoftwareSeni is Sydney-based and our main focus is offering extended team services to Australian startups and businesses that think like startups.
This focus is why our talent pool is based in Indonesia. It provides an extensive time zone overlap with Australia that we find makes working with extended development teams so much more effective, both in terms of quality of communication and responsiveness.
Our team of developers (as well as design, UX, devops, and customer service) is based in Yogyakarta. The city is a major learning centre with a large, well-established tech culture. This has allowed us to pick and choose our team members to build the deep expertise that will benefit any project.
We can provide expertise at scales from a part-time single developer up to a team of dozens and for any stage of product development, from ideation to maintenance mode.
If you’re outside of Australia and have strong remote team management capabilities, you might still find the quality and range of our tech talent worth the larger time zone difference.
So if you’re looking to increase your headcount and are searching out tech talent to deliver the outcomes your business needs, get in touch.
Agile basics for small businesses and start-ups
To understand Agile you need to understand where it came from. Building software is hard. You’re building a complex system. Bridges are complex systems, too, but they are regularly completed on time and on budget. So why does software development have a reputation for being unpredictable and unreliable?
The big reason is this: If you start building a bridge over a river then, short of an earthquake, the other bank is still going to be there two years later when you reach it.
But in software and business, two years, even one year, is enough time for the ground to move underneath you. In two years your app might be obsolete or facing dozens of competitors. Or the underlying market has changed, your vision is outdated, and half of the software being built needs to be thrown out, another chunk needs to be updated, and your deadline is suddenly far behind the advancing market and user taste.
You end up chasing change, so busy keeping up that you never reach the finish line.
This is when Agile becomes an advantage.
Agile is not about building software as quickly as possible, it’s about releasing software as quickly as possible.
Agile is where the Minimal Viable Product (MVP) meets Lean Business practices.
Following the MVP philosophy, your product might not be feature complete for years, but when it is developed with the Agile methodology, it will be delivering value to customers within weeks. That value will increase as the Agile team continues to incorporate feedback from users, build new features, and release them to the user base on a regular schedule with a shortened cadence.
For example, two weeks after launch, the search bar appears. Two weeks later users have a dedicated view of their favourite items and the search bar now has autocomplete.
By structuring development to deliver a new release every few weeks, the MVP increases in value to its users but continues to be an MVP, because priority is given to value instead of features.
To find that value, Agile relies on User Stories.
A User Story is a high level description of what an app or web site needs to do written from the perspective of a user.
The classic format for a user story is “As an X, I want to Y, so that Z“.
Here are examples of two user stories:
“As a first time home buyer, I want to keep track of houses I’m interested in, so that I can buy my dream home.”
“As a real estate investor, I want to know average house prices, so that I can spot a bargain.”
User stories aren’t technical. They are meant to be discussion points between product owners/managers and developers. Separate to the discussion the developers devise a strategy to deliver the features that support the user story within a series of sprints.
A sprint is not a period of frantic typing and programming. It’s Agile terminology for the short block of time allocated to implementing features in an app or site.
Sprints tend to be 2 or 3 weeks long. This duration is a compromise between the challenges of software development and the drive to regularly add value to the users’ experience.
If a user story cannot be fulfilled within a single sprint it becomes an Epic. Epics take multiple sprints to complete. The challenge with Epics is maintaining that continuous addition of value to the users at the end of each sprint. It can be done. It just takes planning.
“Working software over comprehensive documentation” is one of the four declarations from the original Agile Manifesto. It is not a rejection of documentation. The writers of the manifesto were all experienced programmers. They knew documentation was important.
But they created Agile at a time when the Waterfall Model was the leading methodology. In Waterfall, you plan and document everything up front. As if you were building a bridge and knew where the other side of the river was. And then your team worked methodically, step-by-step, over months and years, to implement the plan, complete the software, and release it to a market that no longer cared.
It was a methodology that sometimes saw projects spend months producing hundreds, even thousands, of pages of documentation before the first line of code was written.
Under Agile, the developers still need to know how the app is going to work, how it is meant to integrate with your business, and how your vision is to be supported.
Business rules still need to be provided, in all their detail. Wireframes need to be drawn and refined. Colours need to be specified.
Developers are in charge of implementation. You need documentation to provide the necessary details about the “what” they are implementing so they can focus on the “how”. It is one of the keys to being Agile.
Don’t make the mistake of looking at Agile and thinking it is a way of building software faster than normal. Agile brings responsiveness, not acceleration of software development.
That responsiveness requires experienced developers. Developers not just with a certification in Agile, but also years of experience with the technology and in delivering working software. It took years for the developers at SoftwareSeni to reach the level where they can deliver consistently. Learning Agile was a tiny part of that skill.
Responsiveness might also be what rules Agile out for you. If you’re not building software for a new market, or a market that is continually changing, traditional development models may work better and have a tighter fit with your company’s culture.
Your company’s culture might also be the thing that makes Agile the wrong solution for you. A top-down management style where decisions have to pass through multiple levels of stakeholders for sign-off is the opposite of Agile. Agile is about trusting the developers, trusting the conversations with product managers, and allowing a team to execute with minimal interruption.
If you’re already crafting the email chain in your head for feature sign-off, trying to implement an Agile process will be an exercise in frustration.
This article has given you the concepts you need to know when considering adopting an Agile methodology yourself. You hopefully have an insight into whether or not you can (or should) take advantage of it.
If you’re outsourcing your development, an Agile partner like SoftwareSeni can help you decide if Agile is appropriate. We’ll be happy to work with you, help integrate you into your own Agile team of developers, and apply this very effective development strategy to your competitive advantage. Get in touch.