Discussion:
[Cucumber] A way to get Rerun scenarios to no longer show up as failures in the reports
Samuel S
2015-04-20 23:57:41 UTC
Permalink
Hi all,
The option to rerun failed tests at the end is useful, but I don't see a
way for it to report the results properly.

Expected Result: If I run 5 tests during the first round and 1 fails,
and on the second round the failed test passes, then the results should
show all 5 tests as passing.

Actual Result: The results overwrite each other instead of appending to
the earlier results, so the report now shows only that 1 test passed,
and it ignores the fact that 4 other tests passed during the previous
run.

mvn -e verify -Pintegration-tests -Dcucumber.options="@rerun.txt"


@RunWith(Cucumber.class)
@CucumberOptions(format = {"rerun:rerun.txt",
        "com.trulia.infra.WebDriverInitFormatter",
        "json:target/cucumber.json",
        "html:target/cucumber"})
public class RunCukesIT {
}


Advice about this appreciated
--
Posting rules: http://cukes.info/posting-rules.html
---
You received this message because you are subscribed to the Google Groups "Cukes" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cukes+***@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Paolo Ambrosio
2015-04-21 06:20:35 UTC
Permalink
Post by Samuel S
Hi all,
The option to rerun failed tests at the end is useful, but I don't see a
way for it to properly report the results.
Expected Result: If I run 5 tests during the first round and 1 fails... then
the second round that I run it, AND the failed test passes, then the results
should show all 5 tests as passing
The rerun functionality, AFAIK, is meant to be used during development
to re-run failed tests.

"during the first round" makes me think that you are trying to fix
flickering tests by re-running them a few times till they pass. If
this is correct, have you considered the harder but more valuable task
of fixing the cause?
aslak hellesoy
2015-04-21 07:11:42 UTC
Permalink
Post by Samuel S
Hi all,
The option to rerun failed tests at the end is useful, but I don't see a
way for it to properly report the results.
Expected Result: If I run 5 tests during the first round and 1 fails...
then the second round that I run it, AND the failed test passes, then the
results should show all 5 tests as passing
What happened between the first and second run? Why is the scenario passing
now?
Post by Samuel S
Actual Result: The results overwrite each other instead of smartly
appending to the past results, so the result now only shows that 1 test
passed and ignores the fact that 4 other tests passed during the previous
run.
Why do you care about what happened in the first run? When you're using
--rerun to fix a bug, the purpose of Cucumber is to let you know whether
you have fixed the bug/scenario, not to generate a report for all the
other scenarios.

Besides, if reports were generated incrementally from several runs
(presumably with modifications in-between), we'd end up with reports that
can't be trusted.

Aslak
Matt Metzger
2015-04-21 15:05:53 UTC
Permalink
Unfortunately you're not going to get a lot of help for that on this group
- everyone is going to just tell you that you're using Cucumber wrong.
If this is something you need, my suggestion would be to write a rake task
that automatically performs reruns, and aggregates the passes / failures of
multiple runs into a consolidated result.
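For what it's worth, the consolidation step Matt describes can be sketched in plain Java (all names here are illustrative; this is not a Cucumber or Rake API): keep a map from scenario name to its latest result, and let a rerun overwrite only the scenarios it actually executed, so passes from the first round survive.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of consolidating results from several runs:
// a rerun contains only the scenarios it executed, so merging it into
// the first-run results fills in the new outcomes while keeping the
// untouched passes from the earlier run.
public class ResultMerger {

    // Merge runs in order; the most recent outcome for a scenario wins.
    public static Map<String, String> merge(Map<String, String> firstRun,
                                            Map<String, String> rerun) {
        Map<String, String> merged = new LinkedHashMap<>(firstRun);
        merged.putAll(rerun); // rerun outcomes overwrite earlier ones
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> firstRun = new LinkedHashMap<>();
        firstRun.put("login", "failed");
        firstRun.put("search", "passed");
        firstRun.put("checkout", "passed");

        Map<String, String> rerun = new LinkedHashMap<>();
        rerun.put("login", "passed"); // only the failed scenario was rerun

        // All three scenarios now report as passed.
        System.out.println(merge(firstRun, rerun));
    }
}
```

As Aslak notes above, the trade-off is that such a merged report blends outcomes from different runs, which is exactly the trust question being debated in this thread.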
aslak hellesoy
2015-04-21 15:10:17 UTC
Permalink
Post by Matt Metzger
Unfortunately you're not going to get a lot of help for that on this group
- everyone is going to just tell you that you're doing it wrong and you're
using cucumber wrong.
If this is something you need, my suggestion would be to write a rake task
that automatically performs reruns, and aggregates the passes / failures of
multiple runs into a consolidated result.
If you do that, the report you're generating will eventually show all
scenarios as passing, even if some of them would fail if you ran them
again. What's the purpose of such a report?

Aslak
Matt Metzger
2015-04-21 16:01:25 UTC
Permalink
Yes, I understand what this report would show. I also assume Samuel
understands what this report would show, when he asked for a way to
generate this sort of report. I can't answer what the *purpose* of such a
report is, because I am not the one asking for it. I don't know what
problems Samuel is solving, or how he is using Cucumber to solve those
problems. It would be wrong of me to make assumptions about that, and tell
him that he is using the tool wrong.

There's no doubt about it - in most cases, it makes sense to figure out why
tests intermittently pass/fail, and address the root cause. Perhaps in
Samuel's case, there is a very small subset of intermittent tests, and his
team cannot justify the level of effort it would require to solve those. We
simply don't know.

If Samuel came here and said "I have some tests that sometimes fail and
sometimes pass, what should I do about this?" I would be echoing your
comments, but that's not what he asked for.
Aslak Hellesøy
2015-04-21 16:09:14 UTC
Permalink
Post by Matt Metzger
Yes, I understand what this report would show. I also assume Samuel understands what this report would show, when he asked for a way to generate this sort of report. I can't answer what the purpose of such a report is, because I am not the one asking for it. I don't know what problems Samuel is solving, or how he is using Cucumber to solve those problems. It would be wrong of me to make assumptions about that, and tell him that he is using the tool wrong.
There's no doubt about it - in most cases, it makes sense to figure out why tests intermittently pass/fail, and address the root cause. Perhaps in Samuel's case, there is a very small subset of intermittent tests, and his team cannot justify the level of effort it would require to solve those. We simply don't know.
If Samuel came here and said "I have some tests that sometimes fail and sometimes pass, what should I do about this?" I would be echoing your comments, but that's not what he asked for.
My point is that when you offer a solution (which you did) without knowing the problem someone is trying to solve, you might end up putting them in a worse place than they were before.

I can't think of a situation where aggregating a report from reruns would do any good, but I'm all ears.

Aslak
Samuel S
2015-05-09 07:18:02 UTC
Permalink
Hey guys,
Sorry for the late response; I never got the initial notifications about this thread.

Why am I (and others on the web) asking for such a feature? Because the failures come not from our code, but from occasional hiccups in the third-party tools we are using. For example, with a login test case, Appium will sometimes enter my login name as "Samule", and this will break the test. This happens about 1 out of 20 times. It would be incredibly counterproductive to try to predict and code around every possible failure a third-party tool can throw at us. This is why we would want an immediate test retry, and for that test to NOT count as a failure.

Thanks again for hearing us out.
PS I am using cucumber JVM
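For intuition on why an immediate retry helps with this failure mode: if a tooling hiccup occurs on roughly 1 run in 20, and each attempt is independent of the last (an assumption, since a real regression fails every time), the chance a scenario fails on every attempt drops geometrically with the number of retries. A quick back-of-the-envelope check:

```java
// Back-of-the-envelope: probability that an independently flaky step
// fails on every one of (1 + retries) attempts, at a 1-in-20 flake rate.
public class FlakeOdds {
    public static double pAllAttemptsFail(double failureRate, int retries) {
        return Math.pow(failureRate, 1 + retries);
    }

    public static void main(String[] args) {
        double rate = 1.0 / 20.0; // the "1 out of 20 times" above
        System.out.printf("no retry:  %.4f%n", pAllAttemptsFail(rate, 0));   // 0.0500
        System.out.printf("1 retry:   %.4f%n", pAllAttemptsFail(rate, 1));   // 0.0025
        System.out.printf("2 retries: %.6f%n", pAllAttemptsFail(rate, 2));   // 0.000125
    }
}
```

Two retries already push a genuine tooling flake below one failure in 8,000 runs, while a real bug still fails every attempt.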
aslak hellesoy
2015-05-09 08:08:08 UTC
Permalink
Post by Samuel S
Hey guys,
Sorry for late response, I never got the initial notifications about this thread.
Why am I (and others on the web) asking for such a feature? Because the
failures come not from our code, but from occasional hiccups in the third
party tools we are using. For example, with a login test case, Appium
will sometimes enter my login name as "Samule" and this will break the
test.
Ouch! I wouldn't want to use an automation library with that kind of
bug. Is there nothing better out there?
Post by Samuel S
This will happen 1 out of 20 times. It will be incredibly
counterproductive to try and predict and code around every possible failure
a third party tool can provide us. This is why we would want an immediate
test retry, and for that test to NOT count as a failure.
Thanks for providing context and a concrete example. That makes a big
difference.

How would you like to specify what to retry, and how many times it should
be retried?

Aslak
Samuel S
2015-05-09 16:46:28 UTC
Permalink
[Message body hidden by the Google Groups archive.]
aslak hellesoy
2015-05-09 20:35:40 UTC
Permalink
Post by Samuel S
Post by aslak hellesoy
Ouch! I wouldn't want to use an automation library with that kind of
bug. Is there nothing better out there?
I'm not sure how familiar you are with the Automation QA market, but
Appium (and its older brother Selenium), are two of the hottest tools in
Silicon Valley. The odds of convincing management that it's worth the
investment of switching tools are slim to none.
I'm reasonably familiar with the automation market. I was one of the first
3 contributors to Selenium back in 2004 and have used it regularly since.
I've also written and contributed to a couple of other popular automation
tools, and I have published several books on the topic. I regularly deliver
training courses in BDD/Cucumber/automation.

I'm well aware that Appium is one of the most popular automation tools for
Android and iOS, but I haven't used it beyond simple examples. From my
friends who develop mobile apps I keep hearing it's quite buggy. I find it
hard to believe it's so buggy that it can't fill in text fields reliably
though. Do you have a source for that? A link to a bug report?

When a widely-adopted open source tool has severe bugs in basic
functionality, users will attempt to fix it. If bugs still don't get fixed
it's usually because of poor project management, or because the code is so
complex nobody knows how to fix it. What happens next is either a fork, or
a complete replacement by a new and better tool.

Companies switch to new and better tools and technologies all the time.
Companies where technology decisions are made by management and not
developers are usually the last ones to switch to a new technology.
Post by Samuel S
Post by aslak hellesoy
Thanks for providing context and a concrete example. That makes a big
difference.
How would you like to specify what to retry, and how many times it should
be retried?
Here is the exact example of how we are doing it for our Selenium tests
using TestNG. Note, the code below executes an immediate rerun. You
could hardcode the rerun count, or pass it in as a Jenkins build parameter.
http://seleniumproblemswithsolutions.blogspot.com/2012/10/how-to-immediate-rerun-failed-testcase.html
The problem with this approach is that it would apply to *all* failing
tests. That could have a pretty negative knock-on effect. If we were to add
support for this in Cucumber, it would have to use a mechanism that allows
users to easily target the automatic retry to specific scenarios.

It seems to me this would work best with a tagged After hook. Something
like this:

@After("@non-deterministic")
public void retry(Scenario scenario) {
    if (scenario.getRetries() <= 3) scenario.retry();
}

Thoughts?

Aslak
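Independent of whether Cucumber grows such a hook, the immediate-retry mechanics being discussed boil down to a loop like the following (a plain-Java sketch, not a Cucumber or TestNG API; the Runnable stands in for one scenario execution):

```java
// Generic immediate-retry wrapper: run a body up to maxAttempts times,
// treating it as passed on the first successful attempt, and rethrowing
// the last failure only if every attempt failed.
public class Retry {
    public static void runWithRetries(Runnable body, int maxAttempts) {
        if (maxAttempts < 1) {
            throw new IllegalArgumentException("maxAttempts must be >= 1");
        }
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                body.run();
                return; // passed; earlier failed attempts don't count
            } catch (RuntimeException e) {
                last = e; // remember the failure and try again
            }
        }
        throw last; // every attempt failed: report it as a real failure
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // Simulated flaky scenario: fails twice, then passes.
        runWithRetries(() -> {
            if (++calls[0] < 3) throw new RuntimeException("flake");
        }, 3);
        System.out.println("passed after " + calls[0] + " attempts");
    }
}
```

The loop itself is trivial; the hard part Aslak raises is scoping it, which is what the tag in his hook sketch is for.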
Samuel S
2015-05-11 22:33:46 UTC
Permalink
Post by aslak hellesoy
I'm well aware that Appium is one of the most popular automation tools for
Android and iOS, but I haven't used it beyond simple examples. From my
friends who develop mobile apps I keep hearing it's quite buggy. I find it
hard to believe it's so buggy that it can't fill in text fields reliably
though. Do you have a source for that? A link to a bug report?
I don't have it for that specific issue, but here is one where Appium has
trouble clearing a text field: https://github.com/appium/appium/issues/4565
Post by aslak hellesoy
The problem with this approach is that it would apply to *all* failing
tests. That could have a pretty negative knock-on effect. If we were to add
support for this in Cucumber, it would have to use a mechanism that allows
users to easily target the automatic retry to specific scenarios.
It seems to me this would work best with a tagged After hook. Something
like this:

@After("@non-deterministic")
public void retry(Scenario scenario) {
    if (scenario.getRetries() <= 3) scenario.retry();
}
I am using Cucumber 1.1.8 (having trouble upgrading to 1.2.2) and I
don't see scenario.getRetries() / scenario.retry() anywhere.
Is there a way to do this with Cucumber 1.1.8?

Thanks
aslak hellesoy
2015-05-11 23:30:41 UTC
Permalink
Post by Samuel S
I am using Cucumber 1.1.8 (having trouble upgrading to 1.2.2) and I
don't see scenario.getRetries() / scenario.retry() anywhere.
Is there a way to do this with Cucumber 1.1.8?
There is currently no API for automatic retries in any version of Cucumber.
I was simply asking whether you thought this API (if it existed) would be
suitable.

Aslak
Samuel S
2015-05-12 00:00:36 UTC
Permalink
Post by aslak hellesoy
Post by Samuel S
Post by aslak hellesoy
Post by Samuel S
Post by aslak hellesoy
Post by Samuel S
Hey guys,
Sorry for the late response, I never got the initial notifications about
this thread.
Why am I (and others on the web) asking for such a feature? Because
the failures come not from our code, but from occasional hiccups in the
third-party tools we are using. For example, with a login test case,
Appium will sometimes enter my login name as "Samule" and this will break
the test.
Ouch! I wouldn't want to use an automation library that has this kind
of bugs. Is there nothing better out there?
I'm not sure how familiar you are with the Automation QA market, but
Appium (and its older brother Selenium) are two of the hottest tools in
Silicon Valley. The odds of convincing management that it's worth the
investment of switching tools are slim to none.
I'm reasonably familiar with the automation market. I was one of the
first 3 contributors to Selenium back in 2004 and have used it regularly
since. I've also written and contributed to a couple of other popular
automation tools, and I have published several books on the topic. I
regularly deliver training courses in BDD/Cucumber/automation.
I'm well aware that Appium is one of the most popular automation tools
for Android and iOS, but I haven't used it beyond simple examples. From my
friends who develop mobile apps I keep hearing it's quite buggy. I find it
hard to believe it's so buggy that it can't fill in text fields reliably
though. Do you have a source for that? A link to a bug report?
I don't have it for that specific issue, but here is one where Appium has
trouble clearing a text field: https://github.com/appium/appium/issues/4565
Post by aslak hellesoy
When a widely-adopted open source tool has severe bugs in basic
functionality, users will attempt to fix it. If bugs still don't get fixed
it's usually because of poor project management, or because the code is so
complex nobody knows how to fix it. What happens next is either a fork, or
a complete replacement by a new and better tool.
Companies switch to new and better tools and technologies all the time.
Companies where technology decisions are made by management and not
developers are usually the last ones to switch to a new technology.
Post by Samuel S
Post by aslak hellesoy
Post by Samuel S
This will happen 1 out of 20 times. It will be incredibly
counterproductive to try and predict and code around every possible failure
a third party tool can provide us. This is why we would want an immediate
test retry, and for that test to NOT count as a failure.
Thanks for providing context and a concrete example. That makes a big
difference.
Ahh gotcha. Yes, something like that would be great! As long as it reruns
the scenario the same way it would if launched via the rerun.txt formatter
implementation, which resets the app to a clean state before the rerun.
a***@mobfox.com
2018-07-18 16:50:00 UTC
Permalink
Hello,

Any news on this topic?
Austin Wilson
2018-07-26 21:40:32 UTC
Permalink
Hey, I think this might help solve your problem:
http://mkolisnyk.github.io/cucumber-reports/failed-tests-rerun

If you implement his code by changing a few things in your setup:

@RunWith(ExtendedCucumber.class)
@ExtendedCucumberOptions(
        retryCount = 3,
        detailedAggregatedReport = true
)
@CucumberOptions(
        format = ["pretty", "json:Reports/Cucumber/TestResults.json"],
        tags = ["@test"],
        glue = "src/test/groovy",
        features = "src/test/resources"
)


You can rerun failed tests and get an aggregated report that does not show
failures which passed on a later try.
Paolo Ambrosio
2015-05-09 08:11:55 UTC
Permalink
Post by Samuel S
Hey guys,
Sorry for late response, I never got the initial notifications about this thread.
Why am I (and others on the web) asking for such a feature? Because the failures come not from our code, but from occasional hiccups in the third-party tools we are using. For example, with a login test case, Appium will sometimes enter my login name as "Samule" and this will break the test. This will happen 1 out of 20 times. It will be incredibly counterproductive to try and predict and code around every possible failure a third-party tool can throw at us. This is why we would want an immediate test retry, and for that test to NOT count as a failure.
So if the issue is within Appium, why don't you try and fix that
instead of asking for a workaround in Cucumber?

If you are just looking for a workaround, you can probably do what
Matt suggested (but with a JVM build tool instead). If you are looking
for a solution, I'm afraid you'll have to fix the automation library.
Björn Rasmusson
2015-05-09 12:47:41 UTC
Permalink
Post by Samuel S
Post by Samuel S
Hi all,
The option to rerun failed tests at the end is useful, but I don't see
way
Post by Samuel S
for it to properly report the results.
As Paolo says, there is not really an option in Cucumber-JVM to rerun
failed tests at the end. The rerun formatter produces a list of the scenarios
that failed, which can be used later to run those tests and no others. But
yes, it is possible to use the file produced by the rerun formatter to run
the failed tests immediately after the run that produced that rerun file.
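(For reference, the file produced by the rerun formatter is plain text: each line names a feature file plus the line number(s) of the failed scenario(s). The exact layout varies between Cucumber versions, so the sketch below is illustrative only:)

```
features/login.feature:12
features/search.feature:7:23
```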
Post by Samuel S
Post by Samuel S
Expected Result: If I run 5 tests during the first round and 1 fails...
then
Post by Samuel S
the second round that I run it, AND the failed test passes, then the
results
Post by Samuel S
should show all 5 tests as passing
The rerun functionality AFAIK is meant to be used during development
to re-run tests.
"during the first round" makes me think that you are trying to fix
flickering tests by re-running them a few times till they pass. If
this is correct, have you considered the harder but more valuable task
of fixing the cause?
Post by Samuel S
Actual Result: The results overwrite each other instead of smartly
appending
Post by Samuel S
to the past results, so the result now only shows that 1 test passed and
ignores the fact that 4 other tests passed during the previous run.
What you would like to use is:
mvn -e verify -Pintegration-tests -Dcucumber.options="--plugin
json:rerun_result.json @rerun.txt"
so that the result from the second execution of Cucumber-JVM ends up in
"rerun_result.json".
Currently this doesn't quite work, because Cucumber-JVM writes the result
from the second execution of Cucumber-JVM also to the reports specified in
the @CucumberOptions annotation.
If PR #860 <https://github.com/cucumber/cucumber-jvm/pull/860> is accepted,
that problem will be fixed.

Still, you will get two reports, one from the first execution of
Cucumber-JVM and one from the second execution (rerunning the scenarios in
the rerun file), and you will have to process them to get the combined
result (hence the suggestion to use JSON reports, so that there is
something that can be processed automatically).
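That post-processing step can be sketched by modelling each JSON report as a scenario-name-to-status map; `MergeResults` and `merge` are illustrative names, and real parsing of cucumber.json (e.g. with a JSON library) is omitted:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MergeResults {
    // Combine a first-run result set with a rerun result set:
    // any scenario that was re-executed takes its status from the rerun.
    static Map<String, String> merge(Map<String, String> firstRun,
                                     Map<String, String> rerun) {
        Map<String, String> combined = new LinkedHashMap<>(firstRun);
        combined.putAll(rerun); // rerun outcomes override the originals
        return combined;
    }

    public static void main(String[] args) {
        Map<String, String> first = new LinkedHashMap<>();
        first.put("Login", "failed");
        first.put("Search", "passed");
        Map<String, String> rerun = Map.of("Login", "passed");
        System.out.println(merge(first, rerun)); // both scenarios now pass
    }
}
```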
Post by Samuel S
Post by Samuel S
@RunWith(Cucumber.class)
@CucumberOptions(format = {"rerun:rerun.txt",
"com.trulia.infra.WebDriverInitFormatter",
"json:target/cucumber.json","html:target/cucumber"})
public class RunCukesIT {
}
Advice about this appreciated
Hey guys,
Sorry for the late response, I never got the initial notifications about
this thread.
Post by Samuel S
Why am I (and others on the web) asking for such a feature? Because the
failures come not from our code, but from occasional hiccups in the
third-party tools we are using. For example, with a login test case, Appium
will sometimes enter my login name as "Samule" and this will break the
test. This will happen 1 out of 20 times. It will be incredibly
counterproductive to try and predict and code around every possible failure
a third party tool can provide us. This is why we would want an immediate
test retry, and for that test to NOT count as a failure.
So if the issue is within Appium, why don't you try and fix that
instead of asking for a workaround in Cucumber?
It seems like the step definition for the login step is the place to check
that the login was performed correctly and, if necessary, redo the login.
When possible, it seems better to handle Appium issues there, rather than
rerunning the whole scenario when Appium is misbehaving.
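That check-and-redo pattern can be sketched as follows; `FakeField` stands in for an Appium/Selenium element, and every name here (`typeReliably`, `clearAndType`) is illustrative, not Appium API:

```java
public class TypeWithVerify {
    // Stand-in for an Appium/Selenium text field; simulates the reported
    // flakiness by garbling the input on the first attempt.
    static class FakeField {
        private int calls = 0;
        private String value = "";
        void clearAndType(String text) {
            value = (++calls == 1) ? "Samule" : text;
        }
        String value() { return value; }
    }

    // Type into the field, read it back, and re-type until it matches.
    static void typeReliably(FakeField field, String text, int maxAttempts) {
        for (int i = 0; i < maxAttempts; i++) {
            field.clearAndType(text);
            if (text.equals(field.value())) {
                return; // the field now holds exactly what we typed
            }
        }
        throw new IllegalStateException("could not enter text: " + text);
    }

    public static void main(String[] args) {
        FakeField field = new FakeField();
        typeReliably(field, "Samuel", 3);
        System.out.println(field.value());
    }
}
```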

Best Regards
Björn