Why Didn't You Test That?

Curiosity Software

The Curiosity Software Podcast featuring Huw Price and Richard Jordan! Together, they share their insight and expertise in driving software design and development in test. Learn how you can improve your journey to quality software delivery by considering how much you really understand about your systems, and when things inevitably go wrong, why didn't you test that?

So, here at Curiosity Software we're super excited to announce our brand new podcast, Why Didn't You Test That?, featuring Huw Price and Richard Jordan! 📻 Watch and listen to the first episode here.

Additional info: check out the LinkedIn launch post. Why Didn't You Test That? is available wherever you listen to your podcasts.


Episodes

Episode 18: Organizational Challenges in Defining Quality
Apr 29 2024
Episode 18: Organizational Challenges in Defining Quality
Welcome to episode 18 of the Why Didn't You Test That? podcast. In this episode, Curiosity Software's Rich Jordan and CEO James Walker are joined by Chris Harbert, an industry executive, host of Developers Who Test, and founder and CEO of Testery. Together they discuss the organizational challenges in defining quality.

So, quality is essential in software delivery. But who actually owns quality, if that's even a reasonable question to ask? Collaboration between developers and testers is crucial for achieving quality, and mutual respect and involvement in bug fixes can bridge the gap between the two roles and improve overall product quality. But can this lead to the 'bystander effect', in which no one seems to be responsible for quality?

Increasingly, legacy systems mean organizations need to address their technical debt and complexity to improve quality. A clear plan, an architectural overview, and leveraging test automation can help untangle legacy systems and pave the way for better quality practices. This can enhance testability and reduce toil during sprints.

And what of metrics, which play a crucial role in measuring quality? Key metrics include bug discovery rates, test coverage, customer satisfaction scores, and support team efforts. These metrics provide insights into the effectiveness of quality initiatives and highlight areas for improvement.

Finally, as with any episode of Why Didn't You Test That?, we consider AI's impact on your testing effort. Generative AI and non-deterministic behaviour may complicate testing, so skilled testers are paramount to ensure you're leveraging AI in line with organizational quality objectives, meeting customer expectations and providing a good user experience.

The Curiosity Software Podcast featuring Rich Jordan, Huw Price, James Walker and colleagues! Get insight and expertise into what's driving software design and development. Learn how you can improve your journey to quality software delivery by considering how much you really understand about your systems, and when things inevitably go wrong, why didn't you test that? Spotify | YouTube | iTunes | Google Podcasts | Amazon Music | Deezer | RSS Feed
Episode 17: Making Quality Requirements with Colin Hammond
Apr 17 2024
Episode 17: Making Quality Requirements with Colin Hammond
Welcome to episode 17 of the Why Didn't You Test That? podcast. In this episode, Curiosity Software's Rich Jordan and CEO James Walker are joined by Colin Hammond, CEO of ScopeMaster, who discusses how requirements analysis and sizing lend themselves to predicting project schedules and assessing scope in software development.

The neglect of skills and training in requirements elicitation and documentation leads to poor requirements quality and project issues. Hammond emphasises how sizing software using function points allows for accurate estimation, better resource planning, and early identification of project issues (a sketch of the technique follows these show notes). This early detection is crucial to avoid budget overruns, or ultimately to prevent a project from failing. Agile development should not neglect the importance of high-level requirements and architecture, to avoid costly changes later. Functional sizing provides a reliable predictor of effort, aiding project estimation and scope management. AI advancements can automate requirements analysis, generate test scenarios, and offer suggestions, but human supervision and context are essential. Organizations are starting to challenge the ownership of quality, recognizing the need for a holistic approach beyond testing. The shift towards a quality engineering role and new quality-focused positions shows an increasing awareness of requirements and quality in software development.

The Curiosity Software Podcast featuring Rich Jordan, Huw Price, James Walker and colleagues! Get insight and expertise into what's driving software design and development. Learn how you can improve your journey to quality software delivery by considering how much you really understand about your systems, and when things inevitably go wrong, why didn't you test that? Spotify | YouTube | iTunes | Google Podcasts | Amazon Music | Deezer | RSS Feed
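To make the sizing idea above concrete, here is a minimal sketch of unadjusted function point counting using the standard IFPUG-style weights. It only illustrates the general technique; ScopeMaster's own sizing model is not described in the episode, and the example requirement is hypothetical.

```python
# A minimal sketch of functional size estimation using unadjusted
# IFPUG-style function points. The weights below are the standard
# simple/average/complex values.

WEIGHTS = {
    "external_input":   {"simple": 3, "average": 4, "complex": 6},
    "external_output":  {"simple": 4, "average": 5, "complex": 7},
    "external_inquiry": {"simple": 3, "average": 4, "complex": 6},
    "internal_file":    {"simple": 7, "average": 10, "complex": 15},
    "external_file":    {"simple": 5, "average": 7, "complex": 10},
}

def unadjusted_function_points(counts):
    """counts: list of (component_type, complexity) tuples."""
    return sum(WEIGHTS[ctype][complexity] for ctype, complexity in counts)

# Example: a small, hypothetical "create order" requirement.
requirement = [
    ("external_input", "average"),   # order entry screen
    ("external_output", "simple"),   # confirmation message
    ("internal_file", "simple"),     # orders table
]
print(unadjusted_function_points(requirement))  # 4 + 4 + 7 = 15
```

Because the count is derived from the requirement itself, it is available before any code exists, which is what makes it usable for the early effort prediction Hammond describes.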
Episode 16: Tips for Public Speaking & Presenting
Apr 3 2024
Episode 16: Tips for Public Speaking & Presenting
Welcome to episode 16 of the Why Didn't You Test That? podcast! In this episode, the Curiosity team of Rich Jordan, Ben Riley and Lina Deatherage discuss their latest webinar and share tips for public speaking and preparing for live presentations.

The hosts provide insights, personal experiences, and practical advice, making for an informative and enjoyable listen. In this conversation, the team also gets to debating the merits of a hot dog being a sandwich. But beyond such pop culture controversies, there's insight and tips about public speaking, and a focus on how testers are seeking out more formalised test data education.

Keep calm and carry on is the tip from Curiosity's Ben Riley, who shares his enjoyment of chaotic situations, recalling instances where he faced unforeseen technical issues. The tips: of course remain calm, but also think on your feet and adapt during any public speaking engagement.

During a recent webinar, Lina shared her experience of presenting a software demo focused on test data strategies. She touches on the importance of preparation, finding a comfortable pace, and being adaptable in case of any unexpected issues or challenges. Lina also emphasized that the audience is there to learn and support the speaker, rather than to find faults or errors.

During the webinar, the audience expressed interest in the new certified Test Data Fundamentals course. Ensure data isn't a blocker to improving your quality efforts today by completing Curiosity's Test Data Fundamentals & Key Questions course. In this certified course, you'll explore key questions that you should be able to answer when setting up your test data capability and as part of a good test data strategy.

Check out Ben's and Lina's recent webinar, Perfect Your Test Data Strategy: How to Achieve Software Quality and Compliance at Speed. Watch now to learn how you can deliver complete and compliant data on demand, developing quality software at speed.

The Curiosity Software Podcast featuring Huw Price, Rich Jordan and the Curiosity team! Together, they share their insight and expertise in driving software design and development in test. Learn how you can improve your journey to quality software delivery by considering how much you really understand about your systems, and when things inevitably go wrong, why didn't you test that? Spotify | YouTube | iTunes | Google Podcasts | Amazon Music | Deezer | RSS Feed
Episode 15: The Future of AI Co-Pilots with Mark Winteringham
Mar 19 2024
Episode 15: The Future of AI Co-Pilots with Mark Winteringham
Welcome to episode 15 of the Why Didn't You Test That? podcast! In this episode, Curiosity Software's Rich Jordan and James Walker are joined by Mark Winteringham, author of AI-Assisted Testing. Together they reflect on their experience with AI, the effect it has had on software quality and testing, and the future of AI co-pilots.

LLMs, namely ChatGPT, Gemini and Llama, are cool, but what do they offer in terms of delivering software quality? What leaps have you taken in using generative AI technology? How will you future-proof your AI-assisted testing efforts? By now you really should be considering these questions at a strategic, organisational level. Guest Mark Winteringham unravels a collage of challenges as he reflects on his new book, AI-Assisted Testing, with our hosts, providing a balanced perspective on the progress, plateaus, and benefits of using artificial intelligence and co-pilots for delivering quality software.

James follows up by exploring the value of AI co-pilots in testing and the importance of context in prompt engineering, emphasising the need for experimentation to determine what actually makes a good prompt. Seen with a healthy scepticism, prompts can be used as aids to extend quality testing abilities. But to yield better results, rather than prompting AI with a broad question, the advice is to target specific parts of the system or problem (see the sketch after these show notes).

But what does implementing AI technology into your SDLC actually mean, and how does it work? The possibilities seem endless and large language models keep growing, but has there been an impact, or is true transformational change still a while away?

Use Curiosity's code at checkout for a discount on Mark Winteringham's book, AI-Assisted Testing!
Get the book here: https://bit.ly/ai-testing
Use this code: podcuriosity24

The Curiosity Software Podcast featuring Huw Price, Rich Jordan and the Curiosity team! Together, they share their insight and expertise in driving software design and development in test. Learn how you can improve your journey to quality software delivery by considering how much you really understand about your systems, and when things inevitably go wrong, why didn't you test that? Spotify | YouTube | iTunes | Google Podcasts | Amazon Music | Deezer | RSS Feed
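As an illustration of the prompt-targeting advice above, here is a hedged sketch contrasting a broad prompt with one scoped to a single component. `ask_llm` is a hypothetical stand-in for whichever LLM client you use, and the endpoint and business rules are invented for the example.

```python
# A hedged sketch of the prompt-targeting advice from the episode.
# `ask_llm` is a hypothetical stand-in, not a real library call.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your own LLM client here")

# Broad prompt: hard to review, invites generic output.
broad_prompt = "Write tests for my banking app."

# Targeted prompt: scoped to one component, with the rules that matter.
targeted_prompt = """You are helping test one component in isolation.
Component: POST /transfers endpoint (hypothetical).
Rules:
- amount must be > 0 and <= the daily limit
- the source account must have sufficient funds
- the currency must match the account currency
Task: list boundary and negative test cases as (input, expected response) pairs."""

print(targeted_prompt)  # worth reviewing the prompt itself before sending it
```

The narrow prompt constrains the model to the rules that matter, which makes its output far easier for a tester to evaluate, in line with the experimentation the hosts recommend.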
Episode 14: AI-Powered Testing Practices with Alex Martins
Mar 6 2024
Episode 14: AI-Powered Testing Practices with Alex Martins
Welcome to episode 14 of the Why Didn't You Test That? podcast! In this episode, the Curiosity Software team of Rich Jordan and Ben Johnson-Ward are joined by Alex Martins, VP of Strategy at Katalon, to discuss the implications and challenges of AI-powered testing.

This episode goes beyond the hype and marketing euphoria of AI to weigh up the productivity gains coming from GPT-4 and large language models (LLMs) in the software quality space. Guest Alex Martins leads the conversation around the need to put the tester at the centre of AI-powered testing, and only then start building out AI use cases and safeguards. Where the development community has seen tangible gains in AI deployment, the uplift in AI-powered testing practices is just beginning. So, how will this impact software testing professionals? And how will SME knowledge evolve as organizations develop bespoke LLMs?

Ben Johnson-Ward argues that if artificial intelligence is used to create test outputs, then testers will have to evaluate the output of these tests to determine if they are correct. This approach may lead to a decrease in productivity as testers spend time testing the output of AI-generated tests. Testers will be able to fine-tune their AI models and build out a broader toolkit, but what does this look like? While organizations are adopting AI in testing, there will also be an impact on the metrics of repeatability, explainability, and auditability. With this in mind, internal AI committees can establish rules to abate uncertainty.

Rich Jordan follows up on Ben's point, explaining how, from the human perspective, AI may be limited in determining if an application meets the needs of its users. In this use case, AI becomes the co-pilot, a new tool for experts to enhance collaboration, while testers remain the primary pilots. Repeatability is discussed as a characteristic that humans are comfortable with in testing, but can AI offer better alternatives to traditional methods of monitoring code changes and integration flows?

AI-powered practices in software testing and test coverage are still in their early stages. This requires ongoing collaboration, learning, and sharing of experiences among organizations and industry professionals. Finally, the possibilities and potential benefits of AI are too significant to ignore, despite the discomfort and challenges it brings in delivering quality software, faster.

The Curiosity Software Podcast featuring Huw Price, Rich Jordan and the Curiosity team! Together, they share their insight and expertise to help you improve your journey to quality software delivery, by considering how much you really understand about your systems, and when things inevitably go wrong, why didn't you test that? Spotify | YouTube | iTunes | Google Podcasts | Amazon Music | Deezer | RSS Feed
Episode 13: Learning from Software Failures | Why Didn't You Test That?
Feb 20 2024
Episode 13: Learning from Software Failures | Why Didn't You Test That?
Welcome to episode 13 of the Why Didn't You Test That? podcast! In this episode, Ben Riley, Rich Jordan and Paul Wright from the Curiosity Software team discuss their learnings, experiments and experiences with failure.

This episode of Why Didn't You Test That? emphasises the value of experimentation and learning from failure, and why it's key for organizations trying to foster innovation and continuous growth. It highlights the importance of creating a culture of psychological safety, where individuals feel comfortable making mistakes and embrace failures as opportunities to learn and improve.

Paul Wright recalls a failure he experienced in a previous role, relating it to a lack of communication and alignment within an organization. The failure emphasised the importance of understanding how a new idea or initiative fits into the larger business strategy. Effective communication and alignment between departments can prevent internal competition and ensure that efforts are coordinated towards a common goal.

The podcast also covers the challenge of software design for higher education institutions. Due to resource constraints, these institutions often struggle to engage in early design phases and shift left in the testing process. However, there is a growing recognition of the benefits of early involvement to customize solutions and ensure better alignment with specific needs. This highlights the importance of finding ways to overcome resource limitations and actively participate in software design. Seek out and watch or listen to the complete episode to learn more!

The Curiosity Software Podcast featuring Huw Price and Rich Jordan! Together, they share their insight and expertise in driving software design and development in test. Learn how you can improve your journey to quality software delivery by considering how much you really understand about your systems, and when things inevitably go wrong, why didn't you test that? Spotify | YouTube | Google Podcasts | Amazon Music | Deezer | RSS Feed
Episode 12: The Legacy System Conundrum with Nalin Parbhu
Jan 30 2024
Episode 12: The Legacy System Conundrum with Nalin Parbhu
Welcome to episode 12 of the Why Didn't You Test That? podcast, hosted by Curiosity Software's Huw Price and Rich Jordan! In this episode, our hosts are joined by Nalin Parbhu, Founder & CEO at Infuse, to discuss digital transformation challenges for organisations struggling with legacy systems, limited budgets, and a reactive approach to change.

Listen to this episode to learn from Nalin Parbhu's experience working with higher education institutions, which face unique challenges due to the complex nature of their systems and integrations. Additionally, discover why digital transformation and cloud adoption are driven by the need for scalability, flexibility, and improved user experiences. The use of outdated on-premise systems and the customization of off-the-shelf solutions have introduced maintenance and upgrade difficulties. Budget constraints and competing priorities impact IT investments, while limited operational budgets often result in a focus on capital expenditure, leading to delayed or constrained IT initiatives.

Huw Price describes how testing and quality assurance play a crucial role in successful software implementation. However, the lack of standardized processes and the use of point-to-point integrations and legacy systems have led to increased complexity and higher maintenance costs. Nalin Parbhu concludes that organisations are finally starting to recognize the importance of automated testing and the need for disciplined test environments and data management. Rich Jordan adds that there's a need for clear requirements, well-defined acceptance criteria, and accountability to ensure successful partnerships and quality.

The Curiosity Software Podcast featuring Huw Price and Rich Jordan! Together, they share their insight and expertise in driving software design and development in test. Learn how you can improve your journey to quality software delivery by considering how much you really understand about your systems, and when things inevitably go wrong, why didn't you test that? Spotify | YouTube | Google Podcasts | Amazon Music | Deezer | RSS Feed
Episode 11: Understanding Value Streams with Chris Dutta
Dec 1 2023
Episode 11: Understanding Value Streams with Chris Dutta
In this episode, Curiosity Software hosts Huw Price and Rich Jordan are joined by Chris Dutta, co-founder and head of consulting at quality engineering specialists Dragonfly, to explore the benefits of implementing value stream management (VSM). Dutta succinctly describes VSM as a set of activities improving the end-to-end flow, quality and value that IT provides to broader business objectives, typically through software delivery!

Dragonfly's product Neuro extracts and understands such activities, surfacing them as metrics in a dashboard for VSM discovery. By surfacing insight from commits and repos relative to active or blocked ticket states, it sheds light on lead time from development to deployment across multiple Jira instances. One outcome is reduced waste in the SDLC. But how insightful certain tickets are depends on a secondary layer of complexity metrics.

While the purpose is to bring business value to the end user, the benefit of a value stream is not immediately evident. The main question, then, is how to leverage the metrics while avoiding gaming them. Measurements are insightful, but only if used to inform what the business wants. At worst, a siloed value stream pitches measures as weapons, which can deflect from continual improvement in flow.

Inevitably, forces existing in large organisations naturally lend themselves to waterfall-esque scenarios. So, in practice there are many contradictions for product teams to navigate, both top-down from C-suite colleagues and bottom-up from a practitioner perspective. Ultimately, how do you compose an agile, certified product team to distribute knowledge and skillsets, enabling the value IT provides to the business?

Project versus product sits at the heart of much of the conversation, which, tooling aside, considers an organisation's culture in terms of its measures of deployment frequency, lead time for change, change failure rate, and time to recover from any production failure. These are recognised as the DORA metrics which, combined with complexity metrics such as the testability of a system, support growth and ease of refactoring in a product release. (A sketch of how these metrics can be computed follows these show notes.)

The Curiosity Software Podcast featuring Huw Price and Rich Jordan! Together, they share their insight and expertise in driving software design and development in test. Learn how you can improve your journey to quality software delivery by considering how much you really understand about your systems, and when things inevitably go wrong, why didn't you test that? Spotify | YouTube | Google Podcasts | Amazon Music | Deezer | RSS Feed
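As a concrete companion to the DORA metrics mentioned above, here is a minimal sketch of how the four measures can be computed from deployment records. The record format is a hypothetical simplification; tools such as Dragonfly's Neuro derive the equivalents from real commit and ticket data.

```python
# A minimal sketch of the four DORA metrics, computed from a
# hypothetical list of deployment records.
from datetime import datetime
from statistics import median

deployments = [
    # (committed_at, deployed_at, failed, restored_at_if_failed)
    (datetime(2023, 11, 1, 9), datetime(2023, 11, 1, 15), False, None),
    (datetime(2023, 11, 2, 10), datetime(2023, 11, 3, 11), True,
     datetime(2023, 11, 3, 13)),
    (datetime(2023, 11, 6, 8), datetime(2023, 11, 6, 17), False, None),
]

days_observed = 7
deployment_frequency = len(deployments) / days_observed        # deploys/day
lead_time = median(d[1] - d[0] for d in deployments)           # commit -> deploy
change_failure_rate = sum(d[2] for d in deployments) / len(deployments)
time_to_restore = median(d[3] - d[1] for d in deployments if d[2])

print(f"{deployment_frequency:.2f} deploys/day, lead time {lead_time}, "
      f"{change_failure_rate:.0%} change failures, MTTR {time_to_restore}")
```

The episode's caution applies here too: these numbers only become useful when they inform what the business wants, not when they are gamed or wielded as weapons between teams.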
Episode 10: Improving Enterprise Wide Collaboration with Chris Rowett
Nov 22 2023
Episode 10: Improving Enterprise Wide Collaboration with Chris Rowett
In this follow-up episode with DevOps and digital transformation leader Chris Rowett, Curiosity Software hosts Huw Price and Rich Jordan touch on the DORA metrics. Used in isolation, these benchmarks, which include deployment frequency, lead time for change, time to recover and change failure rate, can be mistaken for measures of business performance. Another issue is that the complexity of a system can't be understood by deploying accelerators and tooling alone. Transformational change at an organisational level needs finance, attention, and mindshare from top-down sponsorship.

Hosts Rich and Huw, with guest Chris Rowett, draw on their enterprise-level experience as developers, product owners and quality leads. They give tips for incentivising inter- and cross-team communication to engender trust and garner sponsorship for organisational change. When leveraging the value of metrics, trust between teams is key to unlocking business momentum. Join the conversation to hear how this is achieved by incentivising practitioner teams and their C-level colleagues to communicate and collaborate towards the goal of better software delivery.

Software development and delivery rely on various teams trusting that code is deployed correctly, so communication is paramount. Be it between internal or third-party teams, this leads to joint end-to-end ownership of a problem, at which point redundancy and technical debt in a system are better understood. The barriers to collaboration are lowered, while insight and understanding are boosted. As a gateway to smarter use of resources, this translates to efficiency gains where pipelines can be unblocked and duplicated tests and effort are reduced.

Where teams collaborate, modelling and tooling then help articulate communication beyond its practical application. Even with such accelerators in software delivery, it's crucial to initially go slower to identify leaks and blockers in a system under test. With some foresight, these can be addressed as part of a value stream to limit the blast radius between parts of the pipeline. This dividend pays off technical debt and improves oversight of DevOps frameworks, giving insight and success that's repeatable for other teams and practitioners to pick up towards the business goal of better product output.

The Curiosity Software Podcast featuring Huw Price and Rich Jordan! Together, they share their insight and expertise in driving software design and development in test. Learn how you can improve your journey to quality software delivery by considering how much you really understand about your systems, and when things inevitably go wrong, why didn't you test that? Spotify | YouTube | Google Podcasts | Amazon Music | Deezer | RSS Feed
Episode 9: Transforming Large Organisations with Chris Rowett
Nov 8 2023
Episode 9: Transforming Large Organisations with Chris Rowett
Welcome to episode 9 of the Why Didn't You Test That? podcast, hosted by Curiosity Software's Huw Price and Rich Jordan! In this episode, our hosts are joined by Chris Rowett, DevOps & digital transformation leader, to explore the challenges associated with bringing change to large organisations!

Organisational change is painful and disruptive; it can change roles and cause concern amongst teams, so in terms of value and results, a good and compelling reason is needed to entice change. In conversation with Chris Rowett, Huw Price and Rich Jordan look at factors around success metrics, and how they impact quality processes in an organisation's software release cycle. In terms of building momentum for transformation, how do you get 60% of the organisation aligned for that push towards crossing the chasm?

Test transformations that are delivered top-down translate to increased automation and cost-cutting, but how do you get there? That is the challenge. Transformation should come through understanding coverage, stability and repeatability; this will help your organisation make better business decisions and encourage change.

The challenge then is to update metrics to enable organisational agility. Huw Price calls this 'quantifiable testability': measuring how easy something is to test, not code quality itself. For Rich Jordan, the concern with updating metrics is that if, over a 15-20 year span within the industry, only around 15-20% of tests are automated, we're missing something. Data on failure should in fact educate the business in understanding how to handle complexity.

Is your organisational design sufficiently enabling business owners and IT to communicate? For instance, we need better attention on aligning the business service with the mechanisms in IT to enable better communication. This translates to organisational refactoring, which in turn pays off technical debt. Otherwise, against your competitors, either through the release of a poor product or the risk of regulatory fines through slow compliance, you risk reputational and monetary damage.

The Curiosity Software Podcast featuring Huw Price and Rich Jordan! Together, they share their insight and expertise in driving software design and development in test. Learn how you can improve your journey to quality software delivery by considering how much you really understand about your systems, and when things inevitably go wrong, why didn't you test that? Spotify | YouTube | Google Podcasts | Amazon Music | Deezer | RSS Feed
Episode 8: The Liberated Tester’s Mindset with Gunesh Patil
Sep 27 2023
Episode 8: The Liberated Tester’s Mindset with Gunesh Patil
Welcome to episode 8 of the Why Didn't You Test That? podcast, hosted by Curiosity Software's Huw Price and Rich Jordan! In this follow-up episode with Gunesh Patil, The Liberated Tester, Rich and Gunesh share their insights into what it means to be a liberated tester and how they approached the adoption of agile methodologies!

Rich and Gunesh propose that using squads working from a blank canvas can achieve a better level of DevOps and agile ways of working. However, a lack of focus on spinning up teams to do integration testing ahead of bulky end-to-end testing may lead organisations into a blinkered, cargo-cult way of working. The cargo-cult way of working makes it harder to sustain a bi-weekly change cadence, by injecting chaos into the system with every incremental change.

In terms of toolsets and architecture, a blank canvas helps squads see what's not working, and thus the challenges to face into. Gunesh suggests that the emphasis needs to be on what's possible to build, rather than on the toolsets alone, to avoid sprints defaulting to mini waterfalls. This holistic approach enables stakeholders to see and communicate requirements, but also data dependencies.

At a high level, this boosts the success rate of a test strategy, as many angles will have already been considered, providing an outcome of good documentation rather than a reliance on head knowledge. In part, model-based testing gets teams making use of this documentation for initial component-level testing. This leads to heightened collaboration between stakeholders.

With model-based testing, stakeholders are given the power to easily execute and comment on a system's architecture at various stages, but also on the technical aspects of components. Isolation and integration can then be explored more freely ahead of bulky end-to-end tests. This leads to a massive gain in time, quality and DevOps speed, and in turn, experimentation and feedback become standard, shared practice between test teams and stakeholders.

The Curiosity Software Podcast featuring Huw Price and Rich Jordan! Together, they share their insight and expertise in driving software design and development in test. Learn how you can improve your journey to quality software delivery by considering how much you really understand about your systems, and when things inevitably go wrong, why didn't you test that? Spotify | YouTube | Google Podcasts | Amazon Music | Deezer | RSS Feed
Episode 7: The Model-Based Tester’s Journey with Gunesh Patil
Sep 14 2023
Episode 7: The Model-Based Tester’s Journey with Gunesh Patil
Why Didn't You Test That? guest Gunesh Patil shares insights from his journey beyond the misunderstandings and misconceptions in model-based testing, alongside Curiosity Software's Rich Jordan!

Rich and Gunesh previously worked together on major SI projects, managing transformational change of disparate systems in a medium-sized organisation. They championed this as Data Automation and Virtual Environments, 'DAVe Ops', for their own version release and change management. DAVe Ops helped spotlight how a shared understanding of a system's architecture, or the lack of one, affects good software testing.

Circa 2010, Rich illustrates this with an anti-pattern where automation teams were stepping in to help test teams run test cases in bulky end-to-ends. This was in response to automated test cases failing. A fail isn't due to the automation, but more to a lack of shared understanding of the consumable breakpoints in a system's architecture.

For stakeholders with short-term sights on improved automation, this omits the benefits of the 'how you get there' approach of model-based testing. It deals less with black boxes, instead observing sustainable metrics such as risk, response times and payloads, i.e. impact analysis. For Gunesh, this visual, flow-based production of reusable components is actually a driving force for efficiency. The need for a siloed back-and-forth of translating business requirements into test cases gets reduced: an operational win for service isolation and test matching.

Sketching a practical middleware/automation test strategy comes only from listening to the expectations of designers and developers, but also of seasoned ancillary actors in the CI/CD pipeline. This ensures constraints and breakpoints are identified, anticipating and avoiding the introduction of accidental complexity in a SUT. The outcome is that costly, time-consuming data overlaps in automation are avoided.

Operationally, test matching, along with getting and allocating data, formalises thinking whilst paying down technical debt. The main takeaway is that collective analysis makes software testing more integrated across teams, giving the opportunity to create a strategy that factors in isolation breakpoints. So, don't just do; also pose questions to tackle organisational, but also technical, inconsistency and intractability.

The Curiosity Software Podcast featuring Huw Price and Rich Jordan! Together, they share their insight and expertise in driving software design and development in test. Learn how you can improve your journey to quality software delivery by considering how much you really understand about your systems, and when things inevitably go wrong, why didn't you test that? Spotify | YouTube | Google Podcasts | Amazon Music | Deezer | RSS Feed
Episode 6: The Impact of Virtualisation in Testing
Aug 1 2023
Episode 6: The Impact of Virtualisation in Testing
Is virtualisation given its rightful place in test design? Deployed in a loosely coupled system, speed and flow increase, while technical debt is reduced. Team alignment and version handling improve, setting the terrain for better software delivery. Welcome to this episode of the Why Didn't You Test That? podcast!

Moving from expertise in original service virtualisation to sandboxing, guest John Power, CEO of Ostia Solutions, shares with Curiosity Software's Huw Price and Rich Jordan his insight that a proper sandbox is fully simulated and gives a good customer experience to developers and testers; because a sandbox is standalone and generates synthetic data, it isn't compromised.

Initially offering a proxy, using request-response algorithms for recording and replaying without mainframes in play, Ostia moved on to providing full simulation, for example for the UK's Open Banking model. This involved moving the technology from simple record and replay of data to actual data generation. (A sketch of the record-and-replay idea follows these show notes.)

The hosts share experiences of leading a virtualisation team, but also of how best to implement Master Data Management using sandboxes in model-based testing, to avoid accidental complexity in the system under test. In adopting such an approach, the starting point really is to understand the current confidence level in the interface you're asking service virtualisation to replace.

In practice, simulating what currently exists in a system, with the framework bringing in functional endpoints and business rules, informs the required APIs, to the benefit of time, security and quality. But the challenge is for organisations to value sandboxes in adjusting the system design, rather than as a regulatory or end-of-year afterthought. Beyond creating reusable assets, you'll ensure continuous updates to sandbox data and testing models.

The approach also gives oversight of which contracts and test environments are affected, alongside sandboxes, though this requires moving away from centralised management of APIs. In working towards a better architectural design of a system, where dependencies are isolated, we can learn from Conway's Law. It suggests a system mimics the organisation's communication, so it's best to improve communication across teams first. Sandboxing will then thrive at an organisational level. You'll be reducing technical debt, risk and extra effort, and in parallel developing mature teams to enable flow, feedback and experimentation in the system under test.

The Curiosity Software Podcast featuring Huw Price and Rich Jordan! Together, they share their insight and expertise in driving software design and development in test. Learn how you can improve your journey to quality software delivery by considering how much you really understand about your systems, and when things inevitably go wrong, why didn't you test that? Spotify | YouTube | Google Podcasts | Amazon Music | Deezer | RSS Feed and iTunes.
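To illustrate the record-and-replay starting point described above, here is a minimal sketch of a replay stub: previously captured request/response pairs are served in place of the real backend. The recording format and endpoints are invented for the example; this is the general idea, not Ostia's product.

```python
# A minimal sketch of record-and-replay service virtualisation:
# serve previously captured responses so downstream teams can test
# without the real dependency being available.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Recordings captured earlier from the real service, keyed on
# method + path (real tools also match on headers, bodies, etc.).
recordings = {
    ("GET", "/accounts/42"): (200, {"id": 42, "balance": 103.50}),
    ("GET", "/accounts/99"): (404, {"error": "no such account"}),
}

class ReplayStub(BaseHTTPRequestHandler):
    def do_GET(self):
        status, body = recordings.get(
            ("GET", self.path), (501, {"error": "not recorded"}))
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body).encode())

if __name__ == "__main__":
    # Point the system under test at http://localhost:8080 instead of
    # the real backend.
    HTTPServer(("localhost", 8080), ReplayStub).serve_forever()
```

As the episode notes, the step beyond this is a full simulation that generates synthetic responses from rules, rather than replaying a fixed capture.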
Episode 5: Critical Thinking in Test Design
Jul 6 2023
Episode 5: Critical Thinking in Test Design
In this episode, Paul Gerrard, award-winning software engineering consultant, author and coach, brings his experience to the Why Didn't You Test That? podcast! Together with Curiosity Software's hosts Huw Price and Rich Jordan, Paul Gerrard discusses the difficulties of organisational hierarchies and how to identify 'stakeholders' as your 'internal testing customers'.

Co-host Rich shares his experience of overseeing a test data team, unpacking one of his favoured mantras of 'go slower to go faster'. He does so as a caution against relying solely on your technology capability: it might be brilliant, but did it solve any problem? In practice this requires analysing the problem as a means of arriving at consensus, which is not in any way the much-derided 'analysis paralysis'. Consensus should be applauded, as it allows teams to get agile 'with a small a' and look beyond technology capabilities alone.

Guest Paul then focuses our attention on the value brought to software delivery by testers, in that their insight and analysis bring us closer to the problem, beyond any supposed solution. With that in mind, co-host Huw explains the primary role of testers and the need to align their purpose as critical thinkers within quality assurance. He caveats this to impress that it's more about open than dogmatic dialogue. An open dialogue helps plug any blind spots stakeholders may have, and leads to demonstrable improvements and time savings across a system and/or its processes.

These beneficial outcomes of eased communication between stakeholder and tester, and vice versa, happen more fluidly in smaller teams. That's where risk workshops can help determine business goals and stop projects going down in flames within larger organisations. These workshops act as a tool for triggering input from broader stakeholders within an organisation's hierarchy. From this can evolve a crossover of perceptions for deciding and prioritising the critical outcomes of any software delivery process.

Co-host Rich cautions us, though, that testers need to avoid jumping too quickly to answering issues in the language of risk and resiliency, which usually leads to 'functional or performance' testing alone. Instead, he promotes the need to foresee and map the requirements that need to be proven to 'accountable' stakeholders beforehand.

There's also a raft of interpersonal skills, including imagination and critical thinking, that testers can bring, which are priceless in early-stage challenges to plug gaps in requirements. The outcome gives a firmer grasp of, and insight into, how software should function.

The Curiosity Software Podcast featuring Huw Price and Rich Jordan! Together, they share their insight and expertise in driving software design and development in test. Learn how you can improve your journey to quality software delivery by considering how much you really understand about your systems, and when things inevitably go wrong, why didn't you test that? Spotify | YouTube | Google Podcasts | Amazon Music | Deezer | RSS Feed
Episode 4: Aligning Testing to Quality with Marcus Merrell
Jun 7 2023
Episode 4: Aligning Testing to Quality with Marcus Merrell
In this episode, Marcus Merrell, Vice President of Technology at Sauce Labs, brings his experience to the Why Didn't You Test That? podcast! Together with Curiosity Software's hosts Huw Price and Rich Jordan, Marcus Merrell discusses the lack of innovation in the testing space over the last 20 years, and why testing should be at the centre of quality efforts. Marcus rejects the idea that testing is a janitorial effort and a cost centre, seeing it instead as revenue protection and a growth opportunity if utilised properly!

The discussion gets started with a few questions: why is innovation limited to digital testing and model-based testing? How do we get beyond the dogmas and fashions that favour tooling over business risk? The takeaway is that focusing just on the common trappings of economically expensive bugs, even with 99% code coverage, pulls focus away from reflecting on real business risks. Addressing those risks takes teams beyond the "must deliver yesterday" culture.

Good software practice requires testers to have a seat at the executive level, to inform the company about the kinds of risks it is exposed to, but a rabbit-hole syndrome has fed a misconception that testers are irrelevant. The way to combat this is by finding early adopters to incubate, diffuse and prove that culture change is an option. In terms of organisational risks, it's great having a testing capability and team, yet unless somebody is feeling pain when something blows up, how likely are they to react? At a more granular level, the conversation needs to focus on persuading organisations to spend time thinking about requirements, and about quality, at the start of the software delivery lifecycle.

The Curiosity Software Podcast featuring Huw Price and Rich Jordan! Together, they share their insight and expertise in driving software design and development in test. Learn how you can improve your journey to quality software delivery by considering how much you really understand about your systems, and when things inevitably go wrong, why didn't you test that? Spotify | YouTube | Google Podcasts | Amazon Music | Deezer | RSS Feed
Episode 3: Bad Data in Software Delivery! 
May 18 2023
Episode 3: Bad Data in Software Delivery!
Bad data, but also production data. Curiosity Software's Huw Price and Rich Jordan move the conversation on from GDPR compliance to bad data in software delivery. Together they explore the concept of the data gambler and the data sceptic, and the tension of using commercially live data in software delivery. They ask: is data more than a side gig for organisations? And ultimately, why is it all about data, and why, without test data, is there no testing? Episode 3 of Why Didn't You Test That? Bad Data in Software Delivery.

Can we change the narrative? Implement design through data security to shore up predictability and repetition, which can reduce disorder in a system and set the bedrock for dynamic automation. And then, what actually defines synthetic data? Responding to this, we touch on favouring API over unit testing, and the role of data-generation AI in moving away from the buzzy 'gold copy' database.

What can you do? Design to make critical changes only once, then discard them, reducing the number of logic gates with each test and so reducing test bloat. Spin up just the right amount of dynamic automation, ensuring you're critical about test cases: start with manual, then move towards negative testing, for instance coping with nulls (see the sketch after these show notes).

Finally, realising the cost to the business: to what extent is a delivery team that works like a feature factory aligned with, or detached from, the audit team verifying by checklist? Maybe start weighing this up as a compliance-versus-conduct-risk situation in the organisation. The need then is to align what's being asked about the customer profile with how the profile of the customer is actually understood.

Ultimately, you'll be raising the bar on both functional testing (one-to-one testing of cases and data) and performance testing (what would, or even should, production do?). GDPR guidance is there to be interpreted by the individual organisation, which is missing right now; so, at the moment, is this just a type of control theatre mitigating a real lack of understanding of the unknowns in a system under test?

The Curiosity Software Podcast featuring Huw Price and Rich Jordan! Together, they share their insight and expertise in driving software design and development in test. Learn how you can improve your journey to quality software delivery by considering how much you really understand about your systems, and when things inevitably go wrong, why didn't you test that? Spotify | YouTube | Google Podcasts | Amazon Music | Deezer | RSS Feed
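As a small illustration of the negative-testing point above (coping with nulls), here is a sketch that takes one valid record and systematically derives null and boundary variants. The customer fields are hypothetical.

```python
# A minimal sketch of deriving negative test data: alongside the
# happy-path case, deliberately generate rows with nulls and boundary
# values so the system's handling of bad data is exercised.
import itertools

valid_customer = {"name": "Ada Lovelace", "age": 36, "email": "ada@example.com"}

def negative_cases(record):
    """Yield one variant per field with that field nulled out."""
    for field in record:
        variant = dict(record)
        variant[field] = None
        yield f"null_{field}", variant

boundary_cases = [
    ("age_zero",     {**valid_customer, "age": 0}),
    ("age_negative", {**valid_customer, "age": -1}),
    ("empty_email",  {**valid_customer, "email": ""}),
]

for name, case in itertools.chain(negative_cases(valid_customer), boundary_cases):
    print(name, case)
```

Generating these variants synthetically, rather than fishing for them in production data, keeps the coverage deliberate and avoids the compliance tension the episode describes.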
Episode 2: What GDPR Means For Software Delivery
Apr 26 2023
Episode 2: What GDPR Means For Software Delivery
GDPR (General Data Protection Regulation) and what it means for software testing and delivery. Guest Tom Pryce, Communications Manager at Curiosity Software, brings his knowledge to episode 2 of Why Didn't You Test That?: What GDPR Means For Software Delivery.

We explore plenty of ideas, including: was the two-year grace period before GDPR legislation a call to arms for organisations to shore up clarity in their IT estates? Who are the gamblers and the sceptics? What have been the implications for data provisioning regarding data minimisation and purpose limitation, particularly for software testing and delivery? What if you reduced the attack surface by using synthetic data in the lower environments? What's the difference between the data subject and the data processor on premise? How does this all impact access to PII and protected characteristics in data provisioned to non-production environments? And finally, where should you be paring back the use of live data used to understand the flows between systems when assisting in migration?

You'll also hear how a fine levied after a cloud migration could, according to the Norwegian DPA (data protection authority), have been avoided by using synthetic data and less production data. (A sketch of masking PII for lower environments follows these show notes.)

The Curiosity Software Podcast featuring Huw Price and Rich Jordan! Together, they share their insight and expertise in driving software design and development in test. Learn how you can improve your journey to quality software delivery by considering how much you really understand about your systems, and when things inevitably go wrong, why didn't you test that? Spotify | YouTube | Google Podcasts | Amazon Music | Deezer | RSS Feed
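To illustrate the data-minimisation theme above, here is a minimal sketch of pseudonymising PII columns before provisioning data to a non-production environment. The column names and salt handling are illustrative assumptions, not a complete masking strategy.

```python
# A minimal sketch of masking PII before data leaves production.
# Deterministic hashing keeps values consistent across tables without
# exposing the original; column names here are hypothetical.
import hashlib

def pseudonymise(value: str, salt: str = "per-environment-secret") -> str:
    """Deterministic, irreversible token for a PII value."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

production_row = {
    "customer_id": 42,
    "name": "Ada Lovelace",
    "email": "ada@example.com",
    "balance": 103.50,  # non-PII, kept as-is so tests stay realistic
}

PII_COLUMNS = {"name", "email"}
masked_row = {
    col: pseudonymise(str(val)) if col in PII_COLUMNS else val
    for col, val in production_row.items()
}
print(masked_row)
```

Masking like this (or replacing rows with fully synthetic data, as the episode advocates) shrinks the attack surface of lower environments while preserving the referential consistency tests depend on.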