over 2 years ago - Juber
Dear Forge Fans,
Recently, we have received comments regarding A/B tests and how we conduct them - in particular, about the A/B tests for Forge Plus and the Daily Challenges update. We therefore thought it would be helpful to share more information on our approach to A/B testing and why it is integral to the development of Forge of Empires.
Let's begin with a definition. An A/B test is an industry-standard method developers use to measure player reaction to a feature. Two or more groups are randomly selected: a test group (group 'A'), which gets access to the feature, and a control group (group 'B'), which does not. By comparing the two, developers can directly observe changes in player behavior and assess the feature's impact. We consider both quantitative data and qualitative feedback - and please keep in mind that forum feedback is only one of many channels through which we receive feedback.
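To illustrate the idea of random group assignment, here is a simplified sketch in Python. This is not our actual implementation - the function and experiment names are made up - but it shows a common way to split players stably and randomly: hash a player ID together with an experiment name, so each player always lands in the same group for a given test.

```python
import hashlib

def assign_group(player_id: str, experiment: str) -> str:
    """Deterministically assign a player to the test group 'A' or control group 'B'.

    Hashing the player ID together with the experiment name produces a
    stable, effectively random 50/50 split that differs per experiment.
    """
    digest = hashlib.sha256(f"{experiment}:{player_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# A player's group never changes within one experiment,
# but may differ between experiments.
group = assign_group("player42", "daily_challenges")  # hypothetical IDs
```

Because the split is derived from a hash rather than stored per player, the assignment is reproducible without keeping any selection list - which is also why no one can be hand-picked into a group.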
Take the recent Daily Challenge update as an example. With both groups in place, we can compare many aspects (daily participation, retention with the feature, the ages of participating players, and so on). That helps us establish whether the change was beneficial and attractive to our players as a new offering. If the results don't show a positive trend, we can dig deeper and check whether there was an issue. We can then intervene and either adjust the feature or scrap it completely.
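Conceptually, the comparison between groups boils down to asking whether a metric (say, daily participation) differs between test and control by more than chance would explain. The sketch below uses a standard two-proportion z-score for that question; the numbers are made up for illustration, and our real analysis pipeline is far more involved.

```python
from math import sqrt

def compare_rates(successes_a: int, total_a: int,
                  successes_b: int, total_b: int) -> float:
    """Two-proportion z-score: how strongly does the test group's
    rate differ from the control group's, relative to random noise?"""
    p_a = successes_a / total_a
    p_b = successes_b / total_b
    # Pooled rate under the assumption that both groups behave the same.
    p = (successes_a + successes_b) / (total_a + total_b)
    se = sqrt(p * (1 - p) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Hypothetical numbers: 620 of 1000 test players completed a challenge
# vs 550 of 1000 control players. |z| > 1.96 is the conventional
# threshold for a difference unlikely to be pure chance.
z = compare_rates(620, 1000, 550, 1000)
```

A positive z-score well above the threshold would suggest the feature genuinely increased participation, while a value near zero would tell us the change made no measurable difference.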
This is why A/B tests are essential for us: they give us more meaningful data from a live environment, which we use to refine features and identify potential issues.
Moreover, releasing a new feature to a smaller group of users allows us to monitor feedback more effectively and avoid the frustration that could result from releasing it to the whole player base at once.
Forge of Empires is, of course, a big game, supported by hundreds of thousands of players across many Worlds - which also means much can go wrong. A/B tests are a must to reduce the risk of unveiling a new feature to such a large audience. Releasing an untested feature could lead to negative experiences with the game - both for you and for us.
It is also important to emphasize that A/B tests are not selective: no player is pre-selected for the test group, and no player gains any long-term advantage. You may have noticed that some features are never A/B tested, because the advantage to the test group would be too large to be fair - and we do want to keep the game fair. Selection for tests is purely random; fairness aside, randomization is also integral to the quality of the test results.
We hope this clarifies why A/B tests are key to our development strategy and will remain a fixture of future updates.
Sincerely,
Your Forge Team.