Welcome to a new episode of our interview series! Quality assurance is an integral part of any software development process, right? Obviously, AdTech is no exception. To explore the nuances of QA in AdTech, we decided to interview Mykola Skyba, Attekmi’s automated quality assurance engineer with an eagle eye for bugs and errors.
So, are you ready for some insights? Then keep reading!
So, Mykola, before we dive deep into the nuances of quality assurance in AdTech, please tell us a bit more about your background. How did your career path lead you to AdTech?
I started out in QA doing manual testing — basic stuff like checking features, logging bugs, and making sure things did not break. Over time, I got more into automation because clicking the same button a hundred times is not exactly fun. I picked up Java, Selenium WebDriver, TestNG, REST Assured — also used Postman and some SQL when I needed to check data or hit APIs directly.
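To give a feel for that stack, here is a minimal REST Assured sketch of the kind of API check I mean – the host, path, and response fields are hypothetical, made up purely for illustration:

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.empty;
import static org.hamcrest.Matchers.greaterThan;
import static org.hamcrest.Matchers.not;

import org.testng.annotations.Test;

public class BidApiSmokeTest {

    @Test
    public void bidEndpointReturnsWellFormedData() {
        given()
            .baseUri("https://api.example.com") // hypothetical host
            .queryParam("format", "json")
        .when()
            .get("/v1/bids") // hypothetical path
        .then()
            .statusCode(200)
            // Verify the data behind the UI, not just that the call succeeds:
            .body("bids", not(empty()))
            .body("bids[0].price", greaterThan(0.0f));
    }
}
```

Run under TestNG, a check like this validates the payload itself rather than just a green status code.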
When I joined Attekmi, I stepped into the world of AdTech, and it hit differently. It is not just web pages and forms — it is fast auctions, tight integrations, and huge amounts of data moving around in milliseconds. You can’t just test if a button works; you need to make sure the data behind it actually makes sense and does not break three systems downstream.
My job as an AQA here is not just about writing scripts. I help plan the testing strategy, decide what is important to cover first, tweak tests to work with live traffic, read through logs when something is off, and constantly give feedback to the devs.
AdTech is tricky, yeah — but that is what keeps it interesting. You’ve got to understand how the system is built and how the business side works. And you must always keep learning because things change fast.
From your experience, what is the most challenging stage of the AdTech software testing life cycle, and why? How do you handle it?
Basically, the software testing life cycle is the same regardless of the industry you work in. I would say that test execution is the most critical stage since that is where you find out whether the product is ready for release. You prepare the data, set up the environment, run tests automatically or manually, compare the results with your expectations, report any errors, and so on. Keep in mind that it is an iterative process: you may need to go through the test execution stage multiple times to identify and fix all the bugs.
As for best practices, first of all, be patient. Test execution can be a time-consuming process, and that is okay. It is also important to prioritize test cases according to their impact, keep proper documentation, and ensure clear internal communication. When possible, automate repetitive tests to save time, speed up the process, and reduce the probability of human error. I also recommend reviewing the testing process regularly – you may spot optimization opportunities.
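To illustrate the automation point, here is a minimal TestNG sketch using a data provider to run one repetitive check across many inputs – the conversion logic and values are hypothetical stand-ins for a real component under test:

```java
import org.testng.Assert;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class PriceConversionTest {

    // Hypothetical conversion logic standing in for a real component under test.
    static double toMicros(double price) {
        return price * 1_000_000;
    }

    @DataProvider(name = "prices")
    public Object[][] prices() {
        return new Object[][] {
            {0.50, 500_000.0},
            {1.25, 1_250_000.0},
            {10.00, 10_000_000.0},
        };
    }

    // One test method covers every row – no clicking through each case by hand.
    @Test(dataProvider = "prices")
    public void convertsPricesToMicros(double input, double expected) {
        Assert.assertEquals(toMicros(input), expected, 0.0001);
    }
}
```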
Let’s talk a bit about manual and automated testing. You are an AQA engineer, but in your opinion, which approach is more effective in AdTech?
Actually, there is no universal answer to this question – everything depends on the specific situation, whether in AdTech or any other industry. For instance, manual testing is effective when you need to run a test case only once or twice. At the same time, there is always a probability of human error – we all make mistakes sometimes.
Automated testing, in turn, runs faster and is a good choice for repetitive test cases. However, it has drawbacks of its own – it does not account for human factors, and the tests themselves have to be written and maintained.
As you can see, there is no clear winner – again, everything depends on the specifics of your situation. In any case, a good QA engineer should be able to run both automated and manual tests and combine the approaches when necessary. This ensures greater test coverage and higher accuracy.
Among a wide range of testing types, which one is the most crucial in AdTech? Why?
The AdTech industry is always moving forward. New trends and technologies are continuously entering the stage, so developers need to regularly enhance their products with new features to keep them competitive. For instance, at Attekmi, we systematically collect feedback from our users to identify new features or improvements worth implementing. And here comes the answer to the question – in AdTech, regression testing plays a critical role. It is used not only to check whether bugs are actually fixed; regression testing also confirms that the system still works properly after, for instance, new functionality has been added. It is like a safety net for your product.
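One common way to keep that safety net cheap to re-run is to tag stable checks with a group so the whole set executes on every build. A minimal TestNG sketch, with hypothetical targeting logic standing in for real product code:

```java
import org.testng.Assert;
import org.testng.annotations.Test;

public class CampaignTargetingTest {

    // Hypothetical stand-in for existing targeting logic in the product.
    static boolean matchesGeo(String requestGeo, String campaignGeo) {
        return requestGeo.equalsIgnoreCase(campaignGeo);
    }

    // Tagging stable checks as "regression" lets the whole set re-run
    // automatically whenever new functionality lands.
    @Test(groups = {"regression"})
    public void existingGeoMatchingStillWorks() {
        Assert.assertTrue(matchesGeo("US", "us"));
        Assert.assertFalse(matchesGeo("US", "DE"));
    }
}
```

A testng.xml suite that includes the regression group can then be wired into CI, so the net is cast automatically after every change.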
How do you ensure effective testing when time is limited? Do you have any special approach?
The best thing you can do here is set priorities. Actually, that is the most effective approach in any case, whether the deadline is tomorrow or in two months. Identify the level of risk associated with each functionality of your software. The functionalities in the high-risk group are obviously your top priority, so you test them first. Then, when you are done, move on to the medium-risk features. Low-risk functionalities have the lowest priority, so they are the last thing to worry about as the deadline approaches.
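In TestNG terms, one simple way to encode that ordering is with priorities, since lower values run first and the high-risk checks execute even if a run gets cut short. A sketch with placeholder test bodies:

```java
import org.testng.Assert;
import org.testng.annotations.Test;

public class RiskOrderedSuite {

    // TestNG runs lower priority values first, so high-risk checks lead the run.
    @Test(priority = 1, groups = {"high-risk"})
    public void auctionAcceptsValidBidRequest() {
        Assert.assertTrue(true); // placeholder for a real bid-path check
    }

    @Test(priority = 2, groups = {"medium-risk"})
    public void reportingTotalsMatchRawEvents() {
        Assert.assertTrue(true); // placeholder for a real reporting check
    }

    @Test(priority = 3, groups = {"low-risk"})
    public void uiTooltipTextIsCorrect() {
        Assert.assertTrue(true); // placeholder for a real UI check
    }
}
```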
Imagine the following scenario: the product has been released, and then it turns out that there is still a pretty serious bug. Have you faced such a situation? What is the best way to deal with it?
Well, the first thing you should do is inform both your team and your product’s users. Yes, users may not be happy about it, but being honest about the situation helps you maintain their loyalty. Tell them that you are working on the issue, explain its potential impact (avoiding overly complex technical jargon), and provide a realistic timeframe. And while you are fixing the bug, keep users posted on the progress. Keep everything transparent – that is the key to dealing with unpleasant situations.
Then, analyze the impact of the bug thoroughly. Define the affected features, set priorities, and start fixing your product. When you are done, test the fix to make sure that the product works as expected. After everything is ready, do not forget to inform users about this and provide clear guidelines on how to update their software, if necessary.
Last but not least, turn the situation into something beneficial – learn from it. For instance, there may be gaps in your QA processes, and such an issue can help you identify them. It may also reveal that you need to improve your documentation or update your development or QA tools. Fixing a bug without losing the trust of your users is great, but you also need to prevent such problems from happening in the future.
I have faced a situation like that. Everything looked fine during testing, but after the release, an unexpected issue slipped through and affected part of the system. We had to act quickly — align with the team, figure out a short-term solution to keep things stable, and then work through the root cause step by step. It was not ideal, but we managed to handle it without a major impact. And more importantly, we used it as a learning opportunity — updated some of our internal practices, reviewed how we test similar scenarios, and added extra safeguards to avoid the same thing happening again.
What do you think about the growing impact of AI? Is it going to replace QA engineers at some point?
Nowadays, AI is virtually everywhere – it automates repetitive tasks, reduces the probability of human error, generates ad creatives… At the same time, many people worry about the future of their jobs given AI’s growing influence. In my opinion, QA testers and other professionals working in the AdTech industry have no reason to worry that much. AI testing tools already exist, but they will not replace testers (at least, not in the near future). AI still requires human supervision, so when used correctly, it can be a great ally rather than a competitor.
However, I highly recommend that QA engineers keep an eye on the evolution of AI and learn how to use AI tools for testing. The AdTech industry never stands still, and the ability to leverage AI can become a significant competitive advantage.
What hard and soft skills are a must for a QA engineer?
Okay, let’s start with hard skills. First of all, you need to know how to code, and not only for greater career opportunities. Knowledge of programming languages (including database query languages such as SQL) gives you additional testing capabilities, enables you to detect errors more effectively, and streamlines your communication with the tech team since you speak the same language.
Secondly, automation testing. Manual testing is still in use (and I don’t think it will leave the stage), but being able to run automated tests is a must – it reduces costs and speeds up the whole process. And, obviously, you must be aware of all the nuances and stages of the software testing (and development in general) life cycle.
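As an example of the database point above, a bit of SQL through plain JDBC lets a tester catch data integrity problems that no UI check would surface. A minimal sketch, assuming a JDBC driver on the classpath – the connection details, tables, and columns are hypothetical:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class DataConsistencyCheck {

    public static void main(String[] args) throws Exception {
        // Hypothetical connection details, purely for illustration.
        String url = "jdbc:postgresql://localhost:5432/adstats";
        String sql =
            // Impressions recorded without a matching campaign are a data bug
            // that the UI alone would never reveal.
            "SELECT COUNT(*) FROM impressions i " +
            "LEFT JOIN campaigns c ON i.campaign_id = c.id " +
            "WHERE c.id IS NULL";
        try (Connection conn = DriverManager.getConnection(url, "qa", "secret");
             PreparedStatement ps = conn.prepareStatement(sql);
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            long orphans = rs.getLong(1);
            if (orphans > 0) {
                throw new AssertionError(orphans + " impressions reference missing campaigns");
            }
            System.out.println("No orphaned impressions found");
        }
    }
}
```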
If we talk about soft skills, the most important one is communication. Software development is a collaborative effort, and as part of the QA team, you will have to report detected problems clearly. Other skills that play a significant role are attention to detail, patience, time management, and a bit of curiosity and creativity. When you can look at an issue from different angles, it is easier to come up with the most effective solution.
Can you share any recommendations for QA engineers already working in AdTech or planning to start a career in the industry?
Well, the first recommendation that comes to my mind is that you need to learn continuously. Keep an eye on the AdTech trends and QA innovations. Enhance your skills. Learn additional programming languages. The more you know, the easier it is for you to work and grow. As for other tips… I would recommend the following:
Learn everything about the product you work with. That is the most basic recommendation for any software, but in AdTech, it plays an even more crucial role. AdTech solutions are rather complex – RTB capabilities, multiple integrations, high volume of transactions, latency sensitivity… To run tests effectively, you need to know the product from A to Z.
Use automation tools. Again, AdTech products are pretty complex, and doing everything manually will take loads of time. When possible, go for automation to accelerate the process and improve test coverage.
Prioritize clear and timely communication. You will collaborate with developers, product managers, account managers, and other experts. When proper communication and effective cross-functional collaboration are in place, it becomes easier to resolve issues promptly and integrate the QA processes into the overall development process.
Test in production. When it comes to AdTech products, some features can be fully tested only in the production environment due to the need for real traffic and load. Make sure to monitor the testing process thoroughly and have rollback mechanisms ready.
Enhance your log analysis skills. AdTech systems tend to have complex logic, so you need to be able to spot and diagnose issues that arise during data handling or request processing.
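To make that last point concrete, here is a small Java sketch that scans a request log for entries exceeding a latency budget – the log format and the 100 ms threshold are assumptions for illustration:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SlowRequestScanner {

    // Assumed log format: "... request_id=abc123 latency_ms=142 status=200 ..."
    private static final Pattern LATENCY = Pattern.compile("latency_ms=(\\d+)");

    public static void main(String[] args) throws IOException {
        long thresholdMs = 100; // bids that miss the auction window are effectively lost
        try (var lines = Files.lines(Path.of(args[0]))) {
            lines.filter(line -> {
                     Matcher m = LATENCY.matcher(line);
                     return m.find() && Long.parseLong(m.group(1)) > thresholdMs;
                 })
                 .forEach(line -> System.out.println("SLOW: " + line));
        }
    }
}
```

Pointed at a log file (java SlowRequestScanner bidder.log), it prints every request that blew the budget, which is often the fastest way to narrow down where a slow integration hides.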
We still have more insights from our amazing experts to share, so keep an eye on our interview series so you do not miss anything!
Besides, there are other ways to gain some knowledge from Attekmi – contact us to request AdTech and Ad Ops training!