Gaming.Zone Posted Saturday at 11:29 AM

Hey everyone, I wanted to start an open discussion about the quality assurance (QA) process for LaunchBox. I've noticed an increase in stability issues and regressions over recent versions, and I'm curious what QA practices the team currently follows to maintain product quality. A few specific questions come to mind:

- Is there a dedicated QA team, or is testing handled primarily by developers and community feedback?
- Is manual testing performed on each release, and if so, what areas are covered?
- Are new builds going through regression testing to validate existing functionality?
- Is there any level of unit testing or automated testing being done?
- Do you use code reviews or paired development practices before merging changes?
- How do you determine which environments to test on — do you vary PC specs, OS versions, or test both fresh installs and existing builds?
- When new features are added, is there a risk analysis done to understand their potential impact on existing functionality?

From my experience working in software QA across multiple companies, the recent patterns I'm seeing suggest that the QA process for LaunchBox may be slipping or possibly very limited. Even with a small team or rapid release cycle, there are ways to keep strong QA practices in place — from lightweight test plans to structured regression passes. Community feedback is valuable, but it's not a substitute for coordinated QA efforts that work hand-in-hand with development. Having QA involved earlier helps catch issues before they reach production and can dramatically improve stability.

Another thing I'm curious about — has there been significant refactoring in recent builds? If so, that's another strong reason to have thorough regression and comparison testing to ensure nothing breaks during those changes.

Would love to hear from the LaunchBox team (and other users) about what testing practices are currently in place, and whether there's room for improvement or community collaboration to strengthen the QA process.
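To make the "lightweight test plans to structured regression passes" point concrete, here's the kind of thing I have in mind. This is a generic sketch with made-up area names and priorities, nothing specific to LaunchBox's internals: a small, written-down checklist that maps product areas to the checks worth repeating each release, with a priority so a short pass can be trimmed when time is tight.

```python
# Minimal sketch of a lightweight, risk-weighted regression checklist.
# Area names, checks, and priorities are illustrative only.

CHECKLIST = {
    "game import":      {"priority": 1, "checks": ["import by folder", "import by file", "duplicate detection"]},
    "media download":   {"priority": 2, "checks": ["box art scrape", "video download", "missing-media fallback"]},
    "theme rendering":  {"priority": 2, "checks": ["default theme loads", "custom theme loads", "4K scaling"]},
    "controller input": {"priority": 1, "checks": ["menu navigation", "button remap persists"]},
}

def regression_pass(max_priority):
    """Yield the checks to run, highest-risk areas first, up to a priority cutoff."""
    ordered = sorted(CHECKLIST.items(), key=lambda kv: kv[1]["priority"])
    for area, info in ordered:
        if info["priority"] <= max_priority:
            for check in info["checks"]:
                yield f"{area}: {check}"

if __name__ == "__main__":
    # A short smoke pass covers only priority-1 areas; a full pass covers everything.
    for item in regression_pass(max_priority=1):
        print(item)
```

Kept in a spreadsheet or wiki page rather than code, the point is the same: the pass is written down, repeatable, and easy to cut down under time pressure.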
dragon57 Posted Sunday at 02:18 AM

I also come from an IT-related team with a QA background. I made a personal decision to stay with an old build because recent releases of LaunchBox had a number of issues with my installations. I have been too busy to provide feedback, but after searching the beta threads, the issues I saw were already reported. Your points are quite valid. I hope someone from the core team responds.
faeran Posted Monday at 07:59 PM

On 9/27/2025 at 7:29 AM, Gaming.Zone said: Hey everyone, I wanted to start an open discussion about the quality assurance (QA) process for LaunchBox. [...]

Thanks for taking the time to share all this. You clearly have some experience in this field, so we can skip past the 'bugs and regressions are frustrating' bit and jump straight to: we agree.

Our QA process has evolved over the last 4 to 5 years and, if anything, is stronger and more capable than it used to be. LaunchBox is a very open-ended app. Even looking at a single feature like game import, we have around 7 different import methods, each with many options, which easily adds up to 150+ different combinations, not to mention the fun that is OS and devices and the gazillion different ways people have their collections set up. We really do try our best to hit the likely problem areas.

We, admittedly, have been caught by a few larger issues that have really pushed us hard this year, in particular the conflicts between video playback and platform changes (OS, .NET, and hardware).
Since you're asking, we can easily share our QA process:

- We test everything manually, in many different configurations (including new builds and existing builds), on many different pieces of hardware: many computers, controllers, monitor setups, other popular toys (my messy desk speaks for itself), and many third-party programs, tools, plugins, and themes that users love to use in their setups (this always takes longer than the dev cycle by quite a far margin; there's a rough sketch of what that configuration matrix can look like at the end of this post).
- We do structured/informed regression testing.
- We do not have the pleasure of much automated testing, for a variety of good (and some less good) reasons, so manual testing is where we get the most value.
- Every feature is released through the beta channel, except for those things we put into an immediate hotfix from a recent public release.

Manual testing plus a beta will never give us 100% confidence, nor can it. But it's the same process, improved every cycle we go through with all the learnings we build. Bugs will definitely slip through, and we'll definitely try to learn from those that do. The community, as it always has been, is invaluable to our process; we couldn't do this without you, and we appreciate this piece of it beyond measure. Your thoughtful post is definitely a part of that.

P.S. Help us beta test if you don't already. A new beta will come out within the coming week(s); you will find the thread here: https://forums.launchbox-app.com/forum/83-beta-testing/
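As promised above, here's a rough shape for the "many different configurations" idea. This is purely illustrative: the install types, OS versions, and display setups below are hypothetical placeholders, not a description of our actual test lab, and nothing here implies we run this as code; it's just the shape of the matrix we work through by hand.

```python
# Purely illustrative configuration matrix; these values are hypothetical
# placeholders, not a description of the actual LaunchBox test lab.
from itertools import product

INSTALL_TYPES = ["fresh install", "upgrade of existing build"]
OS_VERSIONS   = ["Windows 10", "Windows 11"]
DISPLAYS      = ["single 1080p monitor", "multi-monitor", "4K TV"]

# Every cell that could, in principle, be exercised.
all_configs = list(product(INSTALL_TYPES, OS_VERSIONS, DISPLAYS))

# Cells actually touched in one (hypothetical) test cycle.
covered = {
    ("fresh install", "Windows 11", "single 1080p monitor"),
    ("upgrade of existing build", "Windows 10", "4K TV"),
}

print(f"{len(covered)} of {len(all_configs)} configurations covered this cycle")
for config in all_configs:
    if config not in covered:
        print("not yet covered:", config)
```

The value is simply knowing which cells a cycle did and didn't touch, so the next pass can start from the gaps.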
Gaming.Zone (Author) Posted Monday at 08:32 PM

33 minutes ago, faeran said: Thanks for taking the time to share all this. [...]

Thanks again for the thoughtful reply — I really appreciate the transparency around your process. I can definitely see how complex LaunchBox is to test, especially with so many combinations and integrations. Automation in this kind of desktop environment isn't straightforward and usually requires custom tooling or dedicated engineering time, so it makes sense that your focus is on thorough manual coverage.

I'm curious how QA responsibilities are currently structured on your team. Is there a dedicated QA function or individual, or is testing primarily handled by developers and project leads before release? Sometimes having a separate QA role (even part-time) helps with consistency and documentation, especially when juggling multiple test environments and regression passes.

I'm on the open betas and prefer to be on the cutting edge of the release window, so I'd be happy to provide feedback. One thing that could maximize the value of community testing would be a short beta test strategy — just a couple paragraphs highlighting key areas for testers to focus on.
That way feedback is more targeted and actionable, and you can catch the highest-risk issues more efficiently.
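On the combination explosion mentioned earlier (the 150+ import permutations), one technique that pairs well with purely manual passes is all-pairs selection: rather than running every combination, pick a subset in which every pair of option values appears together at least once. A rough greedy sketch, with invented option names that are not LaunchBox's actual import settings:

```python
# Greedy all-pairs (pairwise) selection sketch. Option names are invented
# for illustration and are not LaunchBox's actual import settings.
from itertools import combinations, product

OPTIONS = {
    "import method": ["folder scan", "single file", "drag and drop"],
    "metadata source": ["local only", "online database"],
    "duplicate handling": ["skip", "replace", "keep both"],
    "media download": ["on", "off"],
}

names = list(OPTIONS)
every_combo = [dict(zip(names, values)) for values in product(*OPTIONS.values())]

def pairs(combo):
    """All (option, value) pairs that co-occur in one combination."""
    items = sorted(combo.items())
    return set(combinations(items, 2))

uncovered = set().union(*(pairs(c) for c in every_combo))
selected = []
while uncovered:
    # Pick the combination that covers the most not-yet-covered pairs.
    best = max(every_combo, key=lambda c: len(pairs(c) & uncovered))
    selected.append(best)
    uncovered -= pairs(best)

print(f"{len(every_combo)} exhaustive combinations -> {len(selected)} pairwise-covering runs")
```

The exact numbers don't matter; the takeaway is that pair coverage typically needs far fewer runs than the full cross product, which makes it realistic to fold into a manual regression pass or a beta focus list.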
AstroBob Posted Tuesday at 01:39 AM

Hiya, thanks for the update there. You're absolutely right that having a dedicated QA resource would bring a lot of value. At the moment, we don't have the luxury of a dedicated person in that specific role, so it's very much a team effort, but it is something we continue to consider as the app grows. As we take on larger projects and deal with more technical complexity, having that kind of focused support becomes more and more important.

We also really appreciate that you're willing to run the betas and give feedback. That kind of involvement helps us tremendously. While we do outline major changes and testing areas in the beta posts, we know this information is not always easy to spot. Since all beta communication happens through the forums, it can be easy to miss key details, which is far from ideal.

We know we can improve here; in particular, one thing we are actively looking at is bringing beta information directly into the app. That way, testers would see the latest updates, key areas to focus on, and known issues without needing to visit the forums. Think something similar to the `New Updates` window that pops up for each new release, but specific to the beta: what has changed, what to test, and links to where users can report feedback. I've gone ahead and scoped this out as a feature request on our side. It would be great to get your upvote and input on how we could make that better: 👉 https://feedback.launchbox.gg/p/beta-specific-update-window

Thanks for keeping us on our toes with this. We know there's definitely room for improvement, so we really do appreciate the feedback. Happy to try and elaborate further on anything.
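Purely as a sketch of the kind of information that in-app beta window might surface (the field names below are invented for illustration and are not a committed design):

```python
# Illustrative only: one possible shape for the beta info the window could pull.
# Field names here are invented for this sketch, not a committed design.
from dataclasses import dataclass, field
from typing import List

@dataclass
class BetaUpdateInfo:
    build_version: str
    changes: List[str] = field(default_factory=list)       # what changed in this beta
    focus_areas: List[str] = field(default_factory=list)   # what testers should focus on
    known_issues: List[str] = field(default_factory=list)  # problems already reported
    feedback_url: str = "https://forums.launchbox-app.com/forum/83-beta-testing/"

example = BetaUpdateInfo(
    build_version="hypothetical beta build",
    changes=["example: reworked video playback backend"],
    focus_areas=["example: startup on fresh installs", "example: multi-monitor layouts"],
    known_issues=["example: a custom theme renders incorrectly at 4K"],
)
print(example.focus_areas)
```

However it ends up being built, the intent is simply that each beta carries its changes, focus areas, known issues, and a feedback link directly into the app.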
Gaming.Zone (Author) Posted Tuesday at 11:46 AM

Thanks so much for the detailed follow-up — I really appreciate how open the team is about process and how receptive you are to feedback. Totally understand the challenge of not having a dedicated QA resource. It's great to hear that testing is a team effort and that you're aware of how valuable dedicated QA could be as the product grows in complexity. Even knowing that helps the community understand where and how to best contribute.

The idea of a beta-specific update window inside the app is fantastic. That would go a long way toward making testing efforts more coordinated and focused — especially for people like me who jump on betas early and want to provide targeted feedback. I've upvoted the feature and added a note highlighting how a simple "focus areas" or "known issues" section could help testers zero in on what matters most each cycle.

Thanks again for taking the time to engage on this — it's encouraging to see such thoughtful consideration given to QA and community testing.
AstroBob Posted Tuesday at 12:49 PM

No worries at all, thanks for understanding, and we definitely appreciate the feedback. It helps keep us grounded and lets us know where we need to focus our attention. Thanks so much for voting up the request and adding additional detail; it already seems to be getting some good traction, and we agree that it would be great to have. We'll have a look at what that would take internally and will be sure to post any updates to that thread.