Discussion:
[Qemu-devel] [PATCH 11/42] job: Add job_delete()
Max Reitz
2018-05-11 22:56:29 UTC
Permalink
This moves freeing the Job object and its fields from block_job_unref()
to job_delete().
---
include/qemu/job.h | 3 +++
blockjob.c | 3 +--
job.c | 6 ++++++
3 files changed, 10 insertions(+), 2 deletions(-)
Reviewed-by: Max Reitz <***@redhat.com>
Max Reitz
2018-05-11 22:53:52 UTC
Permalink
This moves the job_type field from BlockJobDriver to JobDriver.
---
include/block/blockjob_int.h | 3 ---
include/qemu/job.h | 11 +++++++++++
block/backup.c | 2 +-
block/commit.c | 2 +-
block/mirror.c | 4 ++--
block/stream.c | 2 +-
blockjob.c | 16 +++++++---------
job.c | 10 ++++++++++
8 files changed, 33 insertions(+), 17 deletions(-)
[...]
diff --git a/include/qemu/job.h b/include/qemu/job.h
index b4b49f19e1..c87e951c8a 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
[...]
@@ -57,4 +62,10 @@ struct JobDriver {
*/
void *job_create(const char *job_id, const JobDriver *driver, Error **errp);
+/** Returns the JobType of a given Job. */
+JobType job_type(Job *job);
+
+/** Returns the enum string for the JobType of a given Job. */
+const char *job_type_str(Job *job);
+
Is there a good reason for these not to take a const Job *?
#endif
Kevin Wolf
2018-05-14 11:31:00 UTC
Permalink
Post by Max Reitz
This moves the job_type field from BlockJobDriver to JobDriver.
---
include/block/blockjob_int.h | 3 ---
include/qemu/job.h | 11 +++++++++++
block/backup.c | 2 +-
block/commit.c | 2 +-
block/mirror.c | 4 ++--
block/stream.c | 2 +-
blockjob.c | 16 +++++++---------
job.c | 10 ++++++++++
8 files changed, 33 insertions(+), 17 deletions(-)
[...]
diff --git a/include/qemu/job.h b/include/qemu/job.h
index b4b49f19e1..c87e951c8a 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
[...]
@@ -57,4 +62,10 @@ struct JobDriver {
*/
void *job_create(const char *job_id, const JobDriver *driver, Error **errp);
+/** Returns the JobType of a given Job. */
+JobType job_type(Job *job);
+
+/** Returns the enum string for the JobType of a given Job. */
+const char *job_type_str(Job *job);
+
Is there a good reason for these not to take a const Job *?
Not really. I'll change it and take your R-b.

Kevin
John Snow
2018-05-14 20:12:01 UTC
Permalink
Post by Max Reitz
This moves the job_type field from BlockJobDriver to JobDriver.
---
include/block/blockjob_int.h | 3 ---
include/qemu/job.h | 11 +++++++++++
block/backup.c | 2 +-
block/commit.c | 2 +-
block/mirror.c | 4 ++--
block/stream.c | 2 +-
blockjob.c | 16 +++++++---------
job.c | 10 ++++++++++
8 files changed, 33 insertions(+), 17 deletions(-)
[...]
diff --git a/include/qemu/job.h b/include/qemu/job.h
index b4b49f19e1..c87e951c8a 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
[...]
@@ -57,4 +62,10 @@ struct JobDriver {
*/
void *job_create(const char *job_id, const JobDriver *driver, Error **errp);
+/** Returns the JobType of a given Job. */
+JobType job_type(Job *job);
+
+/** Returns the enum string for the JobType of a given Job. */
+const char *job_type_str(Job *job);
+
Is there a good reason for these not to take a const Job *?
#endif
Max is just very excited to talk about Rust some more, I can tell.
Max Reitz
2018-05-11 22:48:59 UTC
Permalink
QAPI types aren't externally visible, so we can rename them without
causing problems. Before we add a job type to Job, rename the enum
so it can be used for more than just block jobs.
---
qapi/block-core.json | 14 +++++++-------
include/block/blockjob_int.h | 2 +-
block/backup.c | 2 +-
block/commit.c | 2 +-
block/mirror.c | 4 ++--
block/stream.c | 2 +-
blockjob.c | 6 +++---
7 files changed, 16 insertions(+), 16 deletions(-)
Reviewed-by: Max Reitz <***@redhat.com>
Max Reitz
2018-05-11 22:46:46 UTC
Permalink
This is the first step towards creating an infrastructure for generic
background jobs that aren't tied to a block device. For now, Job only
stores its ID and JobDriver, the rest stays in BlockJob.
The following patches will move over more parts of BlockJob to Job if
they are meaningful outside the context of a block job.
---
include/block/blockjob.h | 9 +++----
include/block/blockjob_int.h | 4 +--
include/qemu/job.h | 60 ++++++++++++++++++++++++++++++++++++++++++++
block/backup.c | 4 ++-
block/commit.c | 4 ++-
block/mirror.c | 10 +++++---
block/stream.c | 4 ++-
blockjob.c | 46 ++++++++++++++++-----------------
job.c | 48 +++++++++++++++++++++++++++++++++++
tests/test-bdrv-drain.c | 4 ++-
tests/test-blockjob-txn.c | 4 ++-
tests/test-blockjob.c | 12 ++++++---
MAINTAINERS | 2 ++
Makefile.objs | 2 +-
14 files changed, 169 insertions(+), 44 deletions(-)
create mode 100644 include/qemu/job.h
create mode 100644 job.c
diff --git a/include/block/blockjob.h b/include/block/blockjob.h
index 0b57d53084..8acc1a236a 100644
--- a/include/block/blockjob.h
+++ b/include/block/blockjob.h
[...]
@@ -40,6 +41,9 @@ typedef struct BlockJobTxn BlockJobTxn;
* Long-running operation on a BlockDriverState.
*/
typedef struct BlockJob {
+ /** Data belonging to the generic Job infrastructure */
+ Job job;
+
/** The job type, including the job vtable. */
const BlockJobDriver *driver;
Any reason why you keep this field around? Shouldn't it be just
DO_UPCAST(const BlockJobDriver, job_driver, job.driver)?
@@ -47,11 +51,6 @@ typedef struct BlockJob {
BlockBackend *blk;
/**
- * The ID of the block job. May be NULL for internal jobs.
- */
- char *id;
-
- /**
* The coroutine that executes the job. If not NULL, it is
* reentered when busy is false and the job is cancelled.
*/
[...]
diff --git a/blockjob.c b/blockjob.c
index 9943218a34..35d604d05a 100644
--- a/blockjob.c
+++ b/blockjob.c
[...]
@@ -247,7 +247,7 @@ void block_job_unref(BlockJob *job)
block_job_detach_aio_context, job);
blk_unref(job->blk);
error_free(job->blocker);
- g_free(job->id);
+ g_free(job->job.id);
Err. OK. I put my faith in patch 11.

Reviewed-by: Max Reitz <***@redhat.com>

Although I do wonder about BlockJob.driver. It seems to stay even after
this series...
assert(!timer_pending(&job->sleep_timer));
g_free(job);
}
[...]
diff --git a/MAINTAINERS b/MAINTAINERS
index 459e3594e1..a97f60d104 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1369,6 +1369,8 @@ L: qemu-***@nongnu.org
(Cut off here:

M: Jeff Cody <***@redhat.com>

Clever!)
S: Supported
F: blockjob.c
F: include/block/blockjob.h
+F: job.c
+F: include/block/job.h
F: block/backup.c
F: block/commit.c
F: block/stream.c
Kevin Wolf
2018-05-14 12:58:19 UTC
Permalink
Post by Max Reitz
This is the first step towards creating an infrastructure for generic
background jobs that aren't tied to a block device. For now, Job only
stores its ID and JobDriver, the rest stays in BlockJob.
The following patches will move over more parts of BlockJob to Job if
they are meaningful outside the context of a block job.
---
include/block/blockjob.h | 9 +++----
include/block/blockjob_int.h | 4 +--
include/qemu/job.h | 60 ++++++++++++++++++++++++++++++++++++++++++++
block/backup.c | 4 ++-
block/commit.c | 4 ++-
block/mirror.c | 10 +++++---
block/stream.c | 4 ++-
blockjob.c | 46 ++++++++++++++++-----------------
job.c | 48 +++++++++++++++++++++++++++++++++++
tests/test-bdrv-drain.c | 4 ++-
tests/test-blockjob-txn.c | 4 ++-
tests/test-blockjob.c | 12 ++++++---
MAINTAINERS | 2 ++
Makefile.objs | 2 +-
14 files changed, 169 insertions(+), 44 deletions(-)
create mode 100644 include/qemu/job.h
create mode 100644 job.c
diff --git a/include/block/blockjob.h b/include/block/blockjob.h
index 0b57d53084..8acc1a236a 100644
--- a/include/block/blockjob.h
+++ b/include/block/blockjob.h
[...]
@@ -40,6 +41,9 @@ typedef struct BlockJobTxn BlockJobTxn;
* Long-running operation on a BlockDriverState.
*/
typedef struct BlockJob {
+ /** Data belonging to the generic Job infrastructure */
+ Job job;
+
/** The job type, including the job vtable. */
const BlockJobDriver *driver;
Any reason why you keep this field around? Shouldn't it be just
DO_UPCAST(const BlockJobDriver, job_driver, job.driver)?
I left it around to avoid unnecessary churn at this point. However, I
intended to remove it at the end of the series, which I seem to have
forgotten. I'll add a patch to this effect.

Kevin
John Snow
2018-05-11 22:29:24 UTC
Permalink
Every job gets a non-NULL job->txn on creation, but it doesn't
necessarily keep it until it is decommissioned: Finalising a job removes
it from its transaction. Therefore, calling 'block-job-finalize' a
second time on an already concluded job causes an assertion failure.
Remove job->txn from the assertion in block_job_finalize() to fix this.
block_job_do_finalize() still has the same assertion, but if a job is
already removed from its transaction, block_job_apply_verb() will
already error out before we run into that assertion.
---
blockjob.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/blockjob.c b/blockjob.c
index 4de48166b2..b38ed7e265 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -702,7 +702,7 @@ void block_job_complete(BlockJob *job, Error **errp)
void block_job_finalize(BlockJob *job, Error **errp)
{
- assert(job && job->id && job->txn);
+ assert(job && job->id);
if (block_job_apply_verb(job, BLOCK_JOB_VERB_FINALIZE, errp)) {
return;
}
Oh, ack. Good catch.
Max Reitz
2018-05-11 22:30:09 UTC
Permalink
Commit 81193349 removed the only use of block_job_pause/resume_all(),
which was in bdrv_drain_all(). The functions are now unused and can be
removed.
I have a strange liking for all-digit commit hash prefixes.
---
include/block/blockjob_int.h | 14 --------------
blockjob.c | 27 ---------------------------
2 files changed, 41 deletions(-)
Reviewed-by: Max Reitz <***@redhat.com>
John Snow
2018-05-14 19:44:57 UTC
Permalink
Post by Max Reitz
Commit 81193349 removed the only use of block_job_pause/resume_all(),
which was in bdrv_drain_all(). The functions are now unused and can be
removed.
I have a strange liking for all-digit commit hash prefixes.
---
include/block/blockjob_int.h | 14 --------------
blockjob.c | 27 ---------------------------
2 files changed, 41 deletions(-)
And all deletion diffstats.
Reviewed-by: John Snow <***@redhat.com>
Eric Blake
2018-05-14 19:45:24 UTC
Permalink
Post by Max Reitz
Commit 81193349 removed the only use of block_job_pause/resume_all(),
which was in bdrv_drain_all(). The functions are now unused and can be
removed.
I have a strange liking for all-digit commit hash prefixes.
Especially when the first digit is 0, but then the rest includes an 8 or
9. (in the past, I've actually had a script transiently break when
parsing commit ids, and it took me quite a while to figure out why the
problem disappeared after I rebased, until I finally realized that it
was because the script was choking on an attempt to parse an invalid
octal number, only when I got lucky enough to land on a problematic
commit id)
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org
Max Reitz
2018-05-11 22:28:32 UTC
Permalink
The backup block job directly accesses the driver field in BlockJob. Add
a wrapper for getting it.
---
include/block/blockjob.h | 7 +++++++
block/backup.c | 8 +++++---
blockjob.c | 5 +++++
3 files changed, 17 insertions(+), 3 deletions(-)
Reviewed-by: Max Reitz <***@redhat.com>
Max Reitz
2018-05-11 22:25:28 UTC
Permalink
This gets rid of more direct accesses to BlockJob fields from the
job drivers.
---
include/block/blockjob_int.h | 8 ++++++++
block/backup.c | 18 +++++++-----------
block/commit.c | 4 ++--
block/mirror.c | 5 +----
block/stream.c | 4 ++--
blockjob.c | 9 +++++++++
6 files changed, 29 insertions(+), 19 deletions(-)
Reviewed-by: Max Reitz <***@redhat.com>
Max Reitz
2018-05-11 22:19:50 UTC
Permalink
All block job drivers support .set_speed and all of them duplicate the
same code to implement it. Move that code to blockjob.c and remove the
now useless callback.
---
include/block/blockjob.h | 2 ++
include/block/blockjob_int.h | 3 ---
block/backup.c | 13 -------------
block/commit.c | 14 --------------
block/mirror.c | 14 --------------
block/stream.c | 14 --------------
blockjob.c | 12 ++++--------
7 files changed, 6 insertions(+), 66 deletions(-)
diff --git a/include/block/blockjob.h b/include/block/blockjob.h
index 22bf418209..5aa8a6aaec 100644
--- a/include/block/blockjob.h
+++ b/include/block/blockjob.h
@@ -29,6 +29,8 @@
#include "block/block.h"
#include "qemu/ratelimit.h"
+#define SLICE_TIME 100000000ULL /* ns */
+
typedef struct BlockJobDriver BlockJobDriver;
typedef struct BlockJobTxn BlockJobTxn;
SLICE_TIME can be anything. I don't like something that can be anything
to be in a header file. I can see that you still need it in mirror, so
it needs to be in a header; but maybe rename it to...
THROTTLE_SLICE_TIME? At least JOB_SLICE_TIME?

Apart from that:

Reviewed-by: Max Reitz <***@redhat.com>
Max Reitz
2018-05-11 22:14:02 UTC
Permalink
Every block job has a RateLimit, and they all do the exact same thing
with it, so it should be common infrastructure. Move the struct field
for a start.
---
include/block/blockjob.h | 4 ++++
block/backup.c | 5 ++---
block/commit.c | 5 ++---
block/mirror.c | 6 +++---
block/stream.c | 5 ++---
5 files changed, 13 insertions(+), 12 deletions(-)
Instead of finally getting rid of block job throttling, you make it
central functionality? Bah, I say, bah! ;-)

Reviewed-by: Max Reitz <***@redhat.com>
Max Reitz
2018-05-11 22:12:19 UTC
Permalink
Block job drivers are not expected to mess with the internals of the
BlockJob object, so provide wrapper functions for one of the cases where
they still do it: Updating the progress counter.
---
include/block/blockjob.h | 19 +++++++++++++++++++
block/backup.c | 22 +++++++++++++---------
block/commit.c | 16 ++++++++--------
block/mirror.c | 11 +++++------
block/stream.c | 14 ++++++++------
blockjob.c | 10 ++++++++++
6 files changed, 63 insertions(+), 29 deletions(-)
[...]
diff --git a/block/backup.c b/block/backup.c
index 453cd62c24..5d95805472 100644
--- a/block/backup.c
+++ b/block/backup.c
[...]
@@ -420,8 +421,9 @@ static void backup_incremental_init_copy_bitmap(BackupBlockJob *job)
bdrv_set_dirty_iter(dbi, next_cluster * job->cluster_size);
}
- job->common.offset = job->common.len -
- hbitmap_count(job->copy_bitmap) * job->cluster_size;
+ /* TODO block_job_progress_set_remaining() would make more sense */
Extremely true, especially considering that at least there was an
assignment before.
+ block_job_progress_update(&job->common,
+ job->len - hbitmap_count(job->copy_bitmap) * job->cluster_size);
Now, with an incremental update, you have to know that the progress was
0 before this call to make any sense of it.

I could ask: Why don't you just resolve the TODO immediately with

block_job_progress_set_remaining(&job->common,
hbitmap_count(job->copy_bitmap) * job->cluster_size);

?

I suppose one possible answer is that this series has 42 patches as it
is, but I have to say that it took me more time to figure this hunk out
than it would have taken me to acknowledge the above change.

Considering that job->len and job->common.len are now separate after
this patch, and that there is only a single other
block_job_progress_update() call in this file, I can't see any side effects.
bdrv_dirty_iter_free(dbi);
}
[...]
diff --git a/block/mirror.c b/block/mirror.c
index 99da9c0858..77ee9b1791 100644
--- a/block/mirror.c
+++ b/block/mirror.c
[...]
@@ -792,11 +792,10 @@ static void coroutine_fn mirror_run(void *opaque)
block_job_pause_point(&s->common);
cnt = bdrv_get_dirty_count(s->dirty_bitmap);
- /* s->common.offset contains the number of bytes already processed so
- * far, cnt is the number of dirty bytes remaining and
- * s->bytes_in_flight is the number of bytes currently being
- * processed; together those are the current total operation length */
- s->common.len = s->common.offset + s->bytes_in_flight + cnt;
+ /* cnt is the number of dirty bytes remaining and s->bytes_in_flight is
+ * the number of bytes currently being processed; together those are
+ * the current total operation length */
No, together, those are the current remaining operation length.
+ block_job_progress_set_remaining(&s->common, s->bytes_in_flight + cnt);
/* Note that even when no rate limit is applied we need to yield
* periodically with no pending I/O so that bdrv_drain_all() returns.
Kevin Wolf
2018-05-14 10:16:36 UTC
Permalink
Post by Max Reitz
Block job drivers are not expected to mess with the internals of the
BlockJob object, so provide wrapper functions for one of the cases where
they still do it: Updating the progress counter.
---
include/block/blockjob.h | 19 +++++++++++++++++++
block/backup.c | 22 +++++++++++++---------
block/commit.c | 16 ++++++++--------
block/mirror.c | 11 +++++------
block/stream.c | 14 ++++++++------
blockjob.c | 10 ++++++++++
6 files changed, 63 insertions(+), 29 deletions(-)
[...]
diff --git a/block/backup.c b/block/backup.c
index 453cd62c24..5d95805472 100644
--- a/block/backup.c
+++ b/block/backup.c
[...]
@@ -420,8 +421,9 @@ static void backup_incremental_init_copy_bitmap(BackupBlockJob *job)
bdrv_set_dirty_iter(dbi, next_cluster * job->cluster_size);
}
- job->common.offset = job->common.len -
- hbitmap_count(job->copy_bitmap) * job->cluster_size;
+ /* TODO block_job_progress_set_remaining() would make more sense */
Extremely true, especially considering that at least there was an
assignment before.
+ block_job_progress_update(&job->common,
+ job->len - hbitmap_count(job->copy_bitmap) * job->cluster_size);
Now, with an incremental update, you have to know that the progress was
0 before this call to make any sense of it.
I could ask: Why don't you just resolve the TODO immediately with
block_job_progress_set_remaining(&job->common,
hbitmap_count(job->copy_bitmap) * job->cluster_size);
?
I suppose one possible answer is that this series has 42 patches as it
is, but I have to say that it took me more time to figure this hunk out
than it would have taken me to acknowledge the above change.
Considering that job->len and job->common.len are now separate after
this patch, and that there is only a single other
block_job_progress_update() call in this file, I can't see any side effects.
Basically just because I tried to make the naive change whenever I had
to touch something that isn't the main purpose of the patch. The old
code changed offset rather than len, so I used the
function that does the same.

If I reduced len instead of increasing offset, I suppose that at least a
few test cases would have to be updated etc. and who knows what else
(QMP clients shouldn't rely on the current way, but do they?).

I'd rather not make such a change as a side effect of a patch that tries
to do something quite different.
Post by Max Reitz
bdrv_dirty_iter_free(dbi);
}
[...]
diff --git a/block/mirror.c b/block/mirror.c
index 99da9c0858..77ee9b1791 100644
--- a/block/mirror.c
+++ b/block/mirror.c
[...]
@@ -792,11 +792,10 @@ static void coroutine_fn mirror_run(void *opaque)
block_job_pause_point(&s->common);
cnt = bdrv_get_dirty_count(s->dirty_bitmap);
- /* s->common.offset contains the number of bytes already processed so
- * far, cnt is the number of dirty bytes remaining and
- * s->bytes_in_flight is the number of bytes currently being
- * processed; together those are the current total operation length */
- s->common.len = s->common.offset + s->bytes_in_flight + cnt;
+ /* cnt is the number of dirty bytes remaining and s->bytes_in_flight is
+ * the number of bytes currently being processed; together those are
+ * the current total operation length */
No, together, those are the current remaining operation length.
Thanks, will fix.

Kevin
Max Reitz
2018-05-11 21:55:44 UTC
Permalink
Every job gets a non-NULL job->txn on creation, but it doesn't
necessarily keep it until it is decommissioned: Finalising a job removes
it from its transaction. Therefore, calling 'block-job-finalize' a
second time on an already concluded job causes an assertion failure.
Remove job->txn from the assertion in block_job_finalize() to fix this.
block_job_do_finalize() still has the same assertion, but if a job is
already removed from its transaction, block_job_apply_verb() will
already error out before we run into that assertion.
---
blockjob.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Reviewed-by: Max Reitz <***@redhat.com>
Max Reitz
2018-05-14 13:59:33 UTC
Permalink
This moves the job list from BlockJob to Job. Now we can check for
duplicate IDs in job_create().
---
include/block/blockjob.h | 3 ---
include/qemu/job.h | 19 +++++++++++++++++++
blockjob.c | 47 +++++++++++++++++++++++++----------------------
job.c | 31 +++++++++++++++++++++++++++++++
4 files changed, 75 insertions(+), 25 deletions(-)
Reviewed-by: Max Reitz <***@redhat.com>
Max Reitz
2018-05-14 14:20:52 UTC
Permalink
This moves BlockJob.status and the closely related functions
(block_)job_state_transition() and (block_)job_apply_verb to Job. The
two QAPI enums are renamed to JobStatus and JobVerb.
---
qapi/block-core.json | 14 +++----
include/block/blockjob.h | 3 --
include/qemu/job.h | 13 ++++++
blockjob.c | 102 +++++++++++------------------------------------
job.c | 56 ++++++++++++++++++++++++++
tests/test-blockjob.c | 39 +++++++++---------
block/trace-events | 2 -
trace-events | 4 ++
8 files changed, 122 insertions(+), 111 deletions(-)
block-job-dismiss's documentation still mentions
BLOCK_JOB_STATUS_CONCLUDED, but that's cleaned up in patch 39, so I
don't mind giving a:

Reviewed-by: Max Reitz <***@redhat.com>
Max Reitz
2018-05-14 14:29:16 UTC
Permalink
This moves reference counting from BlockJob to Job.
In order to keep calling the BlockJob cleanup code when the job is
deleted via job_unref(), introduce a new JobDriver.free callback. Every
block job must use block_job_free() for this callback, this is asserted
in block_job_create().
---
include/block/blockjob.h | 21 -------------------
include/block/blockjob_int.h | 7 +++++++
include/qemu/job.h | 19 ++++++++++++++++--
block/backup.c | 1 +
block/commit.c | 1 +
block/mirror.c | 2 ++
block/stream.c | 1 +
blockjob.c | 48 +++++++++++++++++++-------------------------
job.c | 22 ++++++++++++++++----
qemu-img.c | 4 ++--
tests/test-bdrv-drain.c | 1 +
tests/test-blockjob-txn.c | 1 +
tests/test-blockjob.c | 6 ++++--
13 files changed, 76 insertions(+), 58 deletions(-)
Reviewed-by: Max Reitz <***@redhat.com>
Max Reitz
2018-05-14 14:39:53 UTC
Permalink
We cannot yet move the whole logic around job cancelling to Job because
it depends on quite a few other things that are still only in BlockJob,
but we can move the cancelled field at least.
---
include/block/blockjob.h | 8 --------
include/block/blockjob_int.h | 8 --------
include/qemu/job.h | 11 +++++++++++
block/backup.c | 6 +++---
block/commit.c | 4 ++--
block/mirror.c | 20 ++++++++++----------
block/stream.c | 4 ++--
blockjob.c | 28 +++++++++++++---------------
job.c | 5 +++++
tests/test-blockjob-txn.c | 6 +++---
tests/test-blockjob.c | 2 +-
11 files changed, 50 insertions(+), 52 deletions(-)
Reviewed-by: Max Reitz <***@redhat.com>
Max Reitz
2018-05-14 15:20:50 UTC
Permalink
When block jobs need an AioContext, they just take it from their main
block node. Generic jobs don't have a main block node, so we need to
assign them an AioContext explicitly.
---
include/qemu/job.h | 7 ++++++-
blockjob.c | 5 ++++-
job.c | 4 +++-
3 files changed, 13 insertions(+), 3 deletions(-)
Reviewed-by: Max Reitz <***@redhat.com>
Max Reitz
2018-05-14 15:52:34 UTC
Permalink
---
include/block/blockjob.h | 5 ----
include/block/blockjob_int.h | 19 ---------------
include/qemu/job.h | 20 ++++++++++++++++
block/backup.c | 7 +++---
block/commit.c | 11 +++++----
block/mirror.c | 15 ++++++------
block/stream.c | 14 +++++------
blockjob.c | 57 ++++----------------------------------------
job.c | 33 +++++++++++++++++++++++++
tests/test-bdrv-drain.c | 7 +++---
tests/test-blockjob-txn.c | 13 +++++-----
tests/test-blockjob.c | 7 +++---
12 files changed, 98 insertions(+), 110 deletions(-)
[...]
diff --git a/block/commit.c b/block/commit.c
index 85baea8f92..d326766e4d 100644
--- a/block/commit.c
+++ b/block/commit.c
@@ -72,9 +72,10 @@ typedef struct {
int ret;
} CommitCompleteData;
-static void commit_complete(BlockJob *job, void *opaque)
+static void commit_complete(Job *job, void *opaque)
{
- CommitBlockJob *s = container_of(job, CommitBlockJob, common);
+ CommitBlockJob *s = container_of(job, CommitBlockJob, common.job);
Now this is just abuse.

...but it's not the first time someone packs two container_of() into
one, it appears. So, whatever, I guess.
+ BlockJob *bjob = &s->common;
CommitCompleteData *data = opaque;
BlockDriverState *top = blk_bs(s->top);
BlockDriverState *base = blk_bs(s->base);
[...]
diff --git a/blockjob.c b/blockjob.c
index d44f5c2e50..6021d885be 100644
--- a/blockjob.c
+++ b/blockjob.c
[...]
@@ -1159,50 +1159,3 @@ BlockErrorAction block_job_error_action(BlockJob *job, BlockdevOnError on_err,
}
return action;
}
-
-typedef struct {
- BlockJob *job;
- AioContext *aio_context;
- BlockJobDeferToMainLoopFn *fn;
- void *opaque;
-} BlockJobDeferToMainLoopData;
-
-static void block_job_defer_to_main_loop_bh(void *opaque)
-{
- BlockJobDeferToMainLoopData *data = opaque;
- AioContext *aio_context;
-
- /* Prevent race with block_job_defer_to_main_loop() */
- aio_context_acquire(data->aio_context);
-
- /* Fetch BDS AioContext again, in case it has changed */
- aio_context = blk_get_aio_context(data->job->blk);
- if (aio_context != data->aio_context) {
- aio_context_acquire(aio_context);
- }
-
- data->fn(data->job, data->opaque);
-
- if (aio_context != data->aio_context) {
- aio_context_release(aio_context);
- }
-
- aio_context_release(data->aio_context);
-
- g_free(data);
-}
-
-void block_job_defer_to_main_loop(BlockJob *job,
- BlockJobDeferToMainLoopFn *fn,
- void *opaque)
-{
- BlockJobDeferToMainLoopData *data = g_malloc(sizeof(*data));
- data->job = job;
- data->aio_context = blk_get_aio_context(job->blk);
- data->fn = fn;
- data->opaque = opaque;
- job->deferred_to_main_loop = true;
-
- aio_bh_schedule_oneshot(qemu_get_aio_context(),
- block_job_defer_to_main_loop_bh, data);
-}
diff --git a/job.c b/job.c
index 6f97a4317e..b074b3ffd7 100644
--- a/job.c
+++ b/job.c
@@ -28,6 +28,7 @@
#include "qapi/error.h"
#include "qemu/job.h"
#include "qemu/id.h"
+#include "qemu/main-loop.h"
#include "trace-root.h"
static QLIST_HEAD(, Job) jobs = QLIST_HEAD_INITIALIZER(jobs);
@@ -170,3 +171,35 @@ void job_unref(Job *job)
g_free(job);
}
}
+
+typedef struct {
+ Job *job;
+ JobDeferToMainLoopFn *fn;
+ void *opaque;
+} JobDeferToMainLoopData;
+
+static void job_defer_to_main_loop_bh(void *opaque)
+{
+ JobDeferToMainLoopData *data = opaque;
+ Job *job = data->job;
+ AioContext *aio_context = job->aio_context;
+
+ /* Prevent race with job_defer_to_main_loop() */
+ aio_context_acquire(aio_context);
I don't have a good feeling about this. The original code had this
comment above an aio_context_acquire() for a context that might
decidedly not have anything to do with the BB's context;
block_job_defer_to_main_loop()'s description was that it would acquire
the latter, so why did it acquire the former at all?

We wouldn't need this comment here at all, because acquiring this
AioContext is part of the interface. That's why I don't have a good
feeling.

The best explanation I can come up with is that the original code
acquired the AioContext both of the block device at the time of the BH
(because that needs to be done), and at the time of
block_job_defer_to_main_loop() -- because the latter is probably the
context the block_job_defer_to_main_loop() call came from, so it should
be (b)locked.

But if that's the case, then the same should be done here. The context
of the job may change between scheduling the BH and the BH being
executed, so we might lock a different context here than the one
job_defer_to_main_loop() ran in (i.e., job->aio_context at the time of
job_defer_to_main_loop() running). And maybe we should lock that old
context, too -- just like block_job_defer_to_main_loop_bh() did.

Max
+ data->fn(data->job, data->opaque);
+ aio_context_release(aio_context);
+
+ g_free(data);
+}
+
+void job_defer_to_main_loop(Job *job, JobDeferToMainLoopFn *fn, void *opaque)
+{
+ JobDeferToMainLoopData *data = g_malloc(sizeof(*data));
+ data->job = job;
+ data->fn = fn;
+ data->opaque = opaque;
+ job->deferred_to_main_loop = true;
+
+ aio_bh_schedule_oneshot(qemu_get_aio_context(),
+ job_defer_to_main_loop_bh, data);
+}
Kevin Wolf
2018-05-15 12:17:25 UTC
Permalink
Post by Max Reitz
---
include/block/blockjob.h | 5 ----
include/block/blockjob_int.h | 19 ---------------
include/qemu/job.h | 20 ++++++++++++++++
block/backup.c | 7 +++---
block/commit.c | 11 +++++----
block/mirror.c | 15 ++++++------
block/stream.c | 14 +++++------
blockjob.c | 57 ++++----------------------------------------
job.c | 33 +++++++++++++++++++++++++
tests/test-bdrv-drain.c | 7 +++---
tests/test-blockjob-txn.c | 13 +++++-----
tests/test-blockjob.c | 7 +++---
12 files changed, 98 insertions(+), 110 deletions(-)
[...]
diff --git a/block/commit.c b/block/commit.c
index 85baea8f92..d326766e4d 100644
--- a/block/commit.c
+++ b/block/commit.c
@@ -72,9 +72,10 @@ typedef struct {
int ret;
} CommitCompleteData;
-static void commit_complete(BlockJob *job, void *opaque)
+static void commit_complete(Job *job, void *opaque)
{
- CommitBlockJob *s = container_of(job, CommitBlockJob, common);
+ CommitBlockJob *s = container_of(job, CommitBlockJob, common.job);
Now this is just abuse.
...but it's not the first time someone packs two container_of() into
one, it appears. So, whatever, I guess.
I don't think it's abuse. Why wouldn't I directly cast to the type that
I really want instead of casting to each of the uninteresting parent
classes, too?
Post by Max Reitz
@@ -170,3 +171,35 @@ void job_unref(Job *job)
g_free(job);
}
}
+
+typedef struct {
+ Job *job;
+ JobDeferToMainLoopFn *fn;
+ void *opaque;
+} JobDeferToMainLoopData;
+
+static void job_defer_to_main_loop_bh(void *opaque)
+{
+ JobDeferToMainLoopData *data = opaque;
+ Job *job = data->job;
+ AioContext *aio_context = job->aio_context;
+
+ /* Prevent race with job_defer_to_main_loop() */
+ aio_context_acquire(aio_context);
I don't have a good feeling about this. The original code had this
comment above an aio_context_acquire() for a context that might
decidedly not have anything to do with the BB's context;
block_job_defer_to_main_loop()'s description was that it would acquire
the latter, so why did it acquire the former at all?
We wouldn't need this comment here at all, because acquiring this
AioContext is part of the interface. That's why I don't have a good
feeling.
To be honest, I don't fully understand what the old code was trying to
do or what race it was talking about, because I don't see any potential
race with job_defer_to_main_loop() (neither in the old nor in the new
code).

My suspicion is that Stefan solved the race that you reported [1] (and
which was not with job_defer_to_main_loop(), but with random code that
runs between that and the BH) just by adding some more code that made
the scenario safe, but missed that this also made the existing code
redundant. In other words, I think taking data->aio_context didn't serve
a purpose even in the old code.

The only thing it could possibly protect is the access of data->job->bs,
but that's not something that changes between job_defer_to_main_loop()
and the execution of the BH.

[1] https://lists.gnu.org/archive/html/qemu-devel/2014-10/msg00260.html
Post by Max Reitz
The best explanation I can come up with is that the original code
acquired the AioContext both of the block device at the time of the BH
(because that needs to be done), and at the time of
block_job_defer_to_main_loop() -- because the latter is probably the
context the block_job_defer_to_main_loop() call came from, so it should
be (b)locked.
But if that's the case, then the same should be done here. The context
of the job may change between scheduling the BH and the BH being
executed, so we might lock a different context here than the one
job_defer_to_main_loop() ran in (i.e., job->aio_context at the time of
job_defer_to_main_loop() running). And maybe we should lock that old
context, too -- just like block_job_defer_to_main_loop_bh() did.
Why should we lock the old context? We don't access anything protected
by it. Even the data->job->bs access has gone away because we now have
job->aio_context.

Kevin
Max Reitz
2018-05-16 10:51:09 UTC
Permalink
Post by Kevin Wolf
Post by Max Reitz
---
include/block/blockjob.h | 5 ----
include/block/blockjob_int.h | 19 ---------------
include/qemu/job.h | 20 ++++++++++++++++
block/backup.c | 7 +++---
block/commit.c | 11 +++++----
block/mirror.c | 15 ++++++------
block/stream.c | 14 +++++------
blockjob.c | 57 ++++----------------------------------------
job.c | 33 +++++++++++++++++++++++++
tests/test-bdrv-drain.c | 7 +++---
tests/test-blockjob-txn.c | 13 +++++-----
tests/test-blockjob.c | 7 +++---
12 files changed, 98 insertions(+), 110 deletions(-)
[...]
diff --git a/block/commit.c b/block/commit.c
index 85baea8f92..d326766e4d 100644
--- a/block/commit.c
+++ b/block/commit.c
@@ -72,9 +72,10 @@ typedef struct {
int ret;
} CommitCompleteData;
-static void commit_complete(BlockJob *job, void *opaque)
+static void commit_complete(Job *job, void *opaque)
{
- CommitBlockJob *s = container_of(job, CommitBlockJob, common);
+ CommitBlockJob *s = container_of(job, CommitBlockJob, common.job);
Now this is just abuse.
...but it's not the first time someone packs two container_of() into
one, it appears. So, whatever, I guess.
I don't think it's abuse. Why wouldn't I directly cast to the type that
I really want instead of casting to each of the uninteresting parent
classes, too?
Because the final parameter is called "member" and not "path". :-)
Post by Kevin Wolf
Post by Max Reitz
@@ -170,3 +171,35 @@ void job_unref(Job *job)
g_free(job);
}
}
+
+typedef struct {
+ Job *job;
+ JobDeferToMainLoopFn *fn;
+ void *opaque;
+} JobDeferToMainLoopData;
+
+static void job_defer_to_main_loop_bh(void *opaque)
+{
+ JobDeferToMainLoopData *data = opaque;
+ Job *job = data->job;
+ AioContext *aio_context = job->aio_context;
+
+ /* Prevent race with job_defer_to_main_loop() */
+ aio_context_acquire(aio_context);
I don't have a good feeling about this. The original code had this
comment above an aio_context_acquire() for a context that might
decidedly not have anything to do with the BB's context;
block_job_defer_to_main_loop()'s description was that it would acquire
the latter, so why did it acquire the former at all?
We wouldn't need this comment here at all, because acquiring this
AioContext is part of the interface. That's why I don't have a good
feeling.
To be honest, I don't fully understand what the old code was trying to
do or what race it was talking about, because I don't see any potential
race with job_defer_to_main_loop() (neither in the old nor in the new
code).
My suspicion is that Stefan solved the race that you reported [1] (and
which was not with job_defer_to_main_loop(), but with random code that
runs between that and the BH) just by adding some more code that made
the scenario safe, but missed that this also made the existing code
redundant. In other words, I think taking data->aio_context didn't serve
a purpose even in the old code.
Possible, yes.

Also seems more likely than any of the explanations I tried to come up with.
Post by Kevin Wolf
The only thing it could possibly protect is the access of data->job->bs,
but that's not something that changes between job_defer_to_main_loop()
and the execution of the BH.
[1] https://lists.gnu.org/archive/html/qemu-devel/2014-10/msg00260.html
Post by Max Reitz
The best explanation I can come up with is that the original code
acquired the AioContext both of the block device at the time of the BH
(because that needs to be done), and at the time of
block_job_defer_to_main_loop() -- because the latter is probably the
context the block_job_defer_to_main_loop() call came from, so it should
be (b)locked.
But if that's the case, then the same should be done here. The context
of the job may change between scheduling the BH and the BH being
executed, so we might lock a different context here than the one
job_defer_to_main_loop() ran in (i.e., job->aio_context at the time of
job_defer_to_main_loop() running). And maybe we should lock that old
context, too -- just like block_job_defer_to_main_loop_bh() did.
Why should we lock the old context? We don't access anything protected
by it. Even the data->job->bs access has gone away because we now have
job->aio_context.
Because the old code did so and I don't know why. O:-)

Your explanation does make sense to me, though, so:

Reviewed-by: Max Reitz <***@redhat.com>
Eric Blake
2018-05-16 18:32:04 UTC
Permalink
Post by Max Reitz
Post by Kevin Wolf
Post by Max Reitz
-static void commit_complete(BlockJob *job, void *opaque)
+static void commit_complete(Job *job, void *opaque)
{
- CommitBlockJob *s = container_of(job, CommitBlockJob, common);
+ CommitBlockJob *s = container_of(job, CommitBlockJob, common.job);
Now this is just abuse.
...but it's not the first time someone packs two container_of() into
one, it appears. So, whatever, I guess.
I don't think it's abuse. Why wouldn't I directly cast to the type that
I really want instead of casting to each of the uninteresting parent
classes, too?
Because the final parameter is called "member" and not "path". :-)
container_of() is using offsetof(); and in C99 7.17, the parameter is
named "member-designator" which can indeed jump through multiple layers
(any valid address constant, as defined in 6.6P9). I don't see this as
abuse of the interface.
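As an aside, the two equivalent spellings can be seen in a self-contained
sketch. The macro and struct layout below merely mirror the series for
illustration; they are not QEMU's actual definitions:

```c
#include <stddef.h>

/* Local stand-in for QEMU's container_of(); the struct nesting mirrors
 * Job embedded in BlockJob embedded in CommitBlockJob, with made-up fields. */
#define container_of(ptr, type, member) \
    ((type *) ((char *) (ptr) - offsetof(type, member)))

typedef struct Job { int id; } Job;
typedef struct BlockJob { Job job; } BlockJob;
typedef struct CommitBlockJob { BlockJob common; } CommitBlockJob;

/* One container_of() with a multi-level member designator... */
static CommitBlockJob *commit_from_job(Job *j)
{
    return container_of(j, CommitBlockJob, common.job);
}

/* ...is equivalent to casting through each parent class in turn. */
static CommitBlockJob *commit_from_job_stepwise(Job *j)
{
    BlockJob *bjob = container_of(j, BlockJob, job);
    return container_of(bjob, CommitBlockJob, common);
}
```

Both helpers compute the same address, since offsetof() accepts any
constant member designator, including one that crosses struct boundaries.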
Post by Max Reitz
Post by Kevin Wolf
Post by Max Reitz
The best explanation I can come up with is that the original code
acquired the AioContext both of the block device at the time of the BH
(because that needs to be done), and at the time of
block_job_defer_to_main_loop() -- because the latter is probably the
context the block_job_defer_to_main_loop() call came from, so it should
be (b)locked.
But if that's the case, then the same should be done here. The context
of the job may change between scheduling the BH and the BH being
executed, so we might lock a different context here than the one
job_defer_to_main_loop() ran in (i.e., job->aio_context at the time of
job_defer_to_main_loop() running). And maybe we should lock that old
context, too -- just like block_job_defer_to_main_loop_bh() did.
Why should we lock the old context? We don't access anything protected
by it. Even the data->job->bs access has gone away because we now have
job->aio_context.
Because the old code did so and I don't know why. O:-)
Then it's best to include that explanation in the commit message itself,
to save the future reader the hassle of digging up this thread.
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org
Max Reitz
2018-05-14 16:47:49 UTC
Permalink
This commit moves some core functions for dealing with the job coroutine
from BlockJob to Job. This includes primarily entering the coroutine
(both for the first and reentering) and yielding explicitly and at pause
points.
---
include/block/blockjob.h | 40 ---------
include/block/blockjob_int.h | 26 ------
include/qemu/job.h | 76 ++++++++++++++++
block/backup.c | 2 +-
block/commit.c | 4 +-
block/mirror.c | 22 ++---
block/replication.c | 2 +-
block/stream.c | 4 +-
blockdev.c | 8 +-
blockjob.c | 201 +++++++------------------------------------
job.c | 137 +++++++++++++++++++++++++++++
tests/test-bdrv-drain.c | 38 ++++----
tests/test-blockjob-txn.c | 12 +--
tests/test-blockjob.c | 14 +--
14 files changed, 296 insertions(+), 290 deletions(-)
[...]
diff --git a/job.c b/job.c
index b074b3ffd7..b4cece1a82 100644
--- a/job.c
+++ b/job.c
[...]
@@ -172,6 +205,110 @@ void job_unref(Job *job)
}
}
+void job_enter_cond(Job *job, bool(*fn)(Job *job))
+{
+ if (!job_started(job)) {
+ return;
+ }
+ if (job->deferred_to_main_loop) {
+ return;
+ }
+
+ job_lock();
+ if (job->busy) {
+ job_unlock();
+ return;
+ }
+
+ if (fn && !fn(job)) {
+ job_unlock();
+ return;
+ }
+
+ assert(!job->deferred_to_main_loop);
+ timer_del(&job->sleep_timer);
+ job->busy = true;
+ job_unlock();
+ aio_co_wake(job->co);
+}
+
+ * Reentering the job coroutine with block_job_enter() before the timer has
There is no job_enter() yet, but you do reference it below, so this
should probably be adjusted here...
+ * expired is allowed and cancels the timer.
+ *
...and here. Or maybe you want to change the job_enter below to a
block_job_enter until there is a job_enter().

With that done (or not, because I don't really care about mid-series
comments);
+ * called explicitly. */
+void coroutine_fn job_do_yield(Job *job, uint64_t ns)
+{
+ job_lock();
+ if (ns != -1) {
+ timer_mod(&job->sleep_timer, ns);
+ }
+ job->busy = false;
+ job_unlock();
+ qemu_coroutine_yield();
+
+ /* Set by job_enter before re-entering the coroutine. */
+ assert(job->busy);
+}
Max Reitz
2018-05-14 16:57:34 UTC
Permalink
There is nothing block layer specific about block_job_sleep_ns(), so
move the function to Job.
---
include/block/blockjob_int.h | 11 -----------
include/qemu/job.h | 17 +++++++++++++++++
block/backup.c | 2 +-
block/commit.c | 2 +-
block/mirror.c | 4 ++--
block/stream.c | 2 +-
blockjob.c | 27 ---------------------------
job.c | 32 ++++++++++++++++++++++++++++++++
tests/test-bdrv-drain.c | 8 ++++----
tests/test-blockjob-txn.c | 2 +-
tests/test-blockjob.c | 2 +-
11 files changed, 60 insertions(+), 49 deletions(-)
The Job.sleep_timer documentation should be adjusted. With that done:

Reviewed-by: Max Reitz <***@redhat.com>
Max Reitz
2018-05-14 17:13:14 UTC
Permalink
While we already moved the state related to job pausing to Job, the
functions to do so were still BlockJob-only. This commit moves them over to
Job.
---
include/block/blockjob.h | 32 -------------------
include/block/blockjob_int.h | 7 +++++
include/qemu/job.h | 37 ++++++++++++++++++++++
block/backup.c | 1 +
block/commit.c | 1 +
block/mirror.c | 2 ++
block/stream.c | 1 +
blockdev.c | 6 ++--
blockjob.c | 73 ++++++++++----------------------------------
job.c | 51 +++++++++++++++++++++++++++++++
tests/test-bdrv-drain.c | 1 +
tests/test-blockjob-txn.c | 1 +
tests/test-blockjob.c | 6 ++--
13 files changed, 125 insertions(+), 94 deletions(-)
Reviewed-by: Max Reitz <***@redhat.com>
Max Reitz
2018-05-14 17:33:17 UTC
Permalink
Since we introduced an explicit status for block jobs, BlockJob.completed
is redundant because it can be derived from the status. Remove the field
from BlockJob and add a function to derive it from the status at the Job
level.
---
include/block/blockjob.h | 3 ---
include/qemu/job.h | 3 +++
blockjob.c | 16 +++++++---------
job.c | 22 ++++++++++++++++++++++
qemu-img.c | 4 ++--
5 files changed, 34 insertions(+), 14 deletions(-)
Reviewed-by: Max Reitz <***@redhat.com>
Max Reitz
2018-05-14 17:50:44 UTC
Permalink
This renames the BlockJobCreateFlags constants, moves a few JOB_INTERNAL
checks to job_create() and the auto_{finalize,dismiss} fields from
BlockJob to Job.
---
include/block/blockjob.h | 17 -----------------
include/block/blockjob_int.h | 3 +--
include/qemu/job.h | 20 +++++++++++++++++++-
block/commit.c | 2 +-
block/mirror.c | 2 +-
block/replication.c | 4 ++--
block/stream.c | 2 +-
blockdev.c | 14 +++++++-------
blockjob.c | 27 +++++++--------------------
job.c | 11 ++++++++++-
qemu-img.c | 2 +-
tests/test-blockjob-txn.c | 2 +-
tests/test-blockjob.c | 4 ++--
13 files changed, 53 insertions(+), 57 deletions(-)
Reviewed-by: Max Reitz <***@redhat.com>
Max Reitz
2018-05-14 17:56:30 UTC
Permalink
block_job_event_pending() doesn't only send a QMP event, but it also
transitions to the PENDING state. Split the function so that we get one
part only sending the event (like other block_job_event_* functions) and
another part that does the state transition.
---
blockjob.c | 27 ++++++++++++++++++---------
1 file changed, 18 insertions(+), 9 deletions(-)
Reviewed-by: Max Reitz <***@redhat.com>
Max Reitz
2018-05-14 18:09:48 UTC
Permalink
Go through the Job layer in order to send QMP events. For the moment,
these functions only call a notifier in the BlockJob layer that sends
the existing commands.
This uses notifiers rather than JobDriver callbacks because internal
users of jobs won't receive QMP events, but might still be interested
in getting notified for the events.
---
include/block/blockjob.h | 9 +++++++++
include/qemu/job.h | 18 ++++++++++++++++++
blockjob.c | 42 ++++++++++++++++++++++++++++--------------
job.c | 19 +++++++++++++++++++
4 files changed, 74 insertions(+), 14 deletions(-)
Reviewed-by: Max Reitz <***@redhat.com>
Max Reitz
2018-05-14 18:46:29 UTC
Permalink
This moves the finalisation of a single job from BlockJob to Job.
Because some parts of this code depend on job transactions, and job transactions
call this code, we introduce some temporary calls from Job functions to
BlockJob ones. This will be fixed once transactions move to Job, too.
---
include/block/blockjob.h | 9 ---
include/block/blockjob_int.h | 36 -----------
include/qemu/job.h | 53 +++++++++++++++-
block/backup.c | 22 +++----
block/commit.c | 2 +-
block/mirror.c | 2 +-
blockjob.c | 142 ++++++++-----------------------------------
job.c | 100 +++++++++++++++++++++++++++++-
qemu-img.c | 2 +-
tests/test-blockjob.c | 10 +--
10 files changed, 194 insertions(+), 184 deletions(-)
[...]
diff --git a/include/qemu/job.h b/include/qemu/job.h
index 1b4397f9a1..12edb822d8 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -29,6 +29,7 @@
#include "qapi/qapi-types-block-core.h"
#include "qemu/queue.h"
#include "qemu/coroutine.h"
+#include "block/aio.h"
typedef struct JobDriver JobDriver;
@@ -105,6 +106,15 @@ typedef struct Job {
/** True if this job should automatically dismiss itself */
bool auto_dismiss;
+ /** ret code passed to block_job_completed. */
+ int ret;
While there is no such function yet, you might not want to mention a
block_job_* function here anyway.

Apart from that, as far as I can see, this comment remains unchanged
even later in this series when block_job_completed() is removed.

So something needs to be changed somewhere, so here is an anticipating

Reviewed-by: Max Reitz <***@redhat.com>
Max Reitz
2018-05-14 19:00:20 UTC
Permalink
block_job_cancel_async() did two things that were still block job specific:
* Setting job->force. This field makes sense on the Job level, so we can
just move it. While at it, rename it to job->force_cancel to make its
purpose more obvious.
* Resetting the I/O status. This can't be moved because generic Jobs
don't have an I/O status. What the function really implements is a
user resume, except without entering the coroutine. Consequently, it
makes sense to call the .user_resume driver callback here which
already resets the I/O status.
The old block_job_cancel_async() has two separate if statements that
check job->iostatus != BLOCK_DEVICE_IO_STATUS_OK and job->user_paused.
However, the former condition always implies the latter (as is
asserted in block_job_iostatus_reset()), so changing the explicit call
of block_job_iostatus_reset() on the former condition with the
.user_resume callback on the latter condition is equivalent and
doesn't need to access any BlockJob specific state.
---
include/block/blockjob.h | 6 ------
include/qemu/job.h | 6 ++++++
block/mirror.c | 4 ++--
blockjob.c | 25 +++++++++++++------------
4 files changed, 21 insertions(+), 20 deletions(-)
I'm not quite sure why you keep this function in blockjob.c, when you've
previously moved such static functions over to job.c and made them
temporarily public (e.g. job_state_transition()).

But I don't really care either way, in fact keeping the function in the
same file makes reviewing easier for me, so:

Reviewed-by: Max Reitz <***@redhat.com>
John Snow
2018-05-14 19:14:09 UTC
Permalink
Block job drivers are not expected to mess with the internals of the
BlockJob object, so provide wrapper functions for one of the cases where
they still do it: Updating the progress counter.
The backup code takes a hot second to read, but it seems correct. Having
both a common len and a backup len is a bit awful, but only in this
patch diff where you have to deal with both.

Reviewed-by: John Snow <***@redhat.com>
John Snow
2018-05-14 19:19:49 UTC
Permalink
Every block job has a RateLimit, and they all do the exact same thing
with it, so it should be common infrastructure. Move the struct field
for a start.
Reviewed-by: John Snow <***@redhat.com>
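As a rough illustration of what such shared infrastructure does — a
simplified, hypothetical slice-based limiter, not QEMU's actual RateLimit
implementation:

```c
#include <stdint.h>

/* Given a speed budget in bytes per second and the bytes processed since
 * the last call, return how long (in ns) the job should sleep to stay
 * under the limit. A speed of 0 means unlimited. */
typedef struct RateLimit {
    uint64_t speed;        /* bytes per second; 0 disables limiting */
    uint64_t dispatched;   /* bytes accounted in the current slice */
} RateLimit;

static uint64_t ratelimit_delay(RateLimit *limit, uint64_t n)
{
    if (limit->speed == 0) {
        return 0;          /* unlimited: never sleep */
    }
    limit->dispatched += n;
    if (limit->dispatched <= limit->speed) {
        return 0;          /* still within this slice's budget */
    }
    /* Sleep long enough for the excess to drain at the allowed speed. */
    return (limit->dispatched - limit->speed) * 1000000000ull / limit->speed;
}
```

Every block job driver duplicated a variant of this pattern, which is why
moving the field (and later the calculation) into common code pays off.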
Max Reitz
2018-05-14 19:26:52 UTC
Permalink
block_job_drain() contains a blk_drain() call which cannot be moved to
Job, so add a new JobDriver callback JobDriver.drain which has a common
implementation for all BlockJobs. In addition to this we keep the
existing BlockJobDriver.drain callback that is called by the common
drain implementation for all block jobs.
---
include/block/blockjob_int.h | 12 ++++++++++++
include/qemu/job.h | 13 +++++++++++++
block/backup.c | 1 +
block/commit.c | 1 +
block/mirror.c | 2 ++
block/stream.c | 1 +
blockjob.c | 20 ++++++++++----------
job.c | 11 +++++++++++
tests/test-bdrv-drain.c | 1 +
tests/test-blockjob-txn.c | 1 +
tests/test-blockjob.c | 2 ++
11 files changed, 55 insertions(+), 10 deletions(-)
diff --git a/include/block/blockjob_int.h b/include/block/blockjob_int.h
index bf2b762808..38fe22d7e0 100644
--- a/include/block/blockjob_int.h
+++ b/include/block/blockjob_int.h
@@ -65,6 +65,10 @@ struct BlockJobDriver {
* If the callback is not NULL, it will be invoked when the job has to be
* synchronously cancelled or completed; it should drain BlockDriverStates
* as required to ensure progress.
+ *
+ * Block jobs must use the default implementation for job_driver.drain,
+ * which will in turn call this callback after doing generic block job
+ * stuff.
*/
void (*drain)(BlockJob *job);
I don't really see the point of having two drain callbacks for block
jobs. Well, it allows an assert() that block_job_drain() is called at
some point, but still. I'd like block jobs to be not very special, but
this makes them a bit more special than they need to be.

Maybe I'd like it a bit more if there was a macro to automatically set
these mandatory values for block jobs...
};
John Snow
2018-05-14 19:27:09 UTC
Permalink
All block job drivers support .set_speed and all of them duplicate the
same code to implement it. Move that code to blockjob.c and remove the
now useless callback.
🎊
Reviewed-by: John Snow <***@redhat.com>
John Snow
2018-05-14 19:36:23 UTC
Permalink
This gets us rid of more direct accesses to BlockJob fields from the
job drivers.
---
include/block/blockjob_int.h | 8 ++++++++
block/backup.c | 18 +++++++-----------
block/commit.c | 4 ++--
block/mirror.c | 5 +----
block/stream.c | 4 ++--
blockjob.c | 9 +++++++++
6 files changed, 29 insertions(+), 19 deletions(-)
[...]
diff --git a/block/backup.c b/block/backup.c
index 8468fd9f94..cfdb6ecdf5 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -325,21 +325,17 @@ static void backup_complete(BlockJob *job, void *opaque)
static bool coroutine_fn yield_and_check(BackupBlockJob *job)
{
+ uint64_t delay_ns;
+
if (block_job_is_cancelled(&job->common)) {
return true;
}
- /* we need to yield so that bdrv_drain_all() returns.
- * (without, VM does not reboot)
- */
I'll kind of miss this terse comment. "without, VM does not reboot" is
very direct and to the point. :)
- if (job->common.speed) {
- uint64_t delay_ns = ratelimit_calculate_delay(&job->common.limit,
- job->bytes_read);
- job->bytes_read = 0;
- block_job_sleep_ns(&job->common, delay_ns);
- } else {
- block_job_sleep_ns(&job->common, 0);
- }
+ /* We need to yield even for delay_ns = 0 so that bdrv_drain_all() can
+ * return. Without a yield, the VM would not reboot. */
+ delay_ns = block_job_ratelimit_get_delay(&job->common, job->bytes_read);
+ job->bytes_read = 0;
+ block_job_sleep_ns(&job->common, delay_ns);
if (block_job_is_cancelled(&job->common)) {
return true;
Max Reitz
2018-05-14 19:37:40 UTC
Permalink
This moves the .complete callback that tells a READY job to complete
from BlockJobDriver to JobDriver. The wrapper function job_complete()
doesn't require anything block job specific any more and can be moved
to Job.
---
include/block/blockjob.h | 10 ----------
include/block/blockjob_int.h | 6 ------
include/qemu/job.h | 8 ++++++++
block/mirror.c | 10 +++++-----
blockdev.c | 2 +-
blockjob.c | 23 +++++------------------
job.c | 16 ++++++++++++++++
tests/test-bdrv-drain.c | 6 +++---
tests/test-blockjob.c | 10 +++++-----
9 files changed, 43 insertions(+), 48 deletions(-)
Reviewed-by: Max Reitz <***@redhat.com>
John Snow
2018-05-14 19:43:09 UTC
Permalink
The backup block job directly accesses the driver field in BlockJob. Add
a wrapper for getting it.
I'm not too far into this series yet but I suppose it turns out that
it's more useful to get the driver than to ask the job system what the
job type is.

Probably there's some QAPI reasons for why that's true, too...

Reviewed-by: John Snow <***@redhat.com>
Max Reitz
2018-05-14 19:49:27 UTC
Permalink
block_job_finish_sync() doesn't contain anything block job specific any
more, so it can be moved to Job.
---
include/qemu/job.h | 9 +++++++++
blockjob.c | 55 +++++++++---------------------------------------------
job.c | 28 +++++++++++++++++++++++++++
3 files changed, 46 insertions(+), 46 deletions(-)
Reviewed-by: Max Reitz <***@redhat.com>
John Snow
2018-05-14 19:57:36 UTC
Permalink
This is the first step towards creating an infrastructure for generic
background jobs that aren't tied to a block device. For now, Job only
stores its ID and JobDriver, the rest stays in BlockJob.
The following patches will move over more parts of BlockJob to Job if
they are meaningful outside the context of a block job.
I think I'll be reviewing this series in a manner that trusts that all
of the obvious cleanups get handled later, and only yelp if something
looks distinctly wrong.

Anything that simply doesn't get cleaned up is something we can fix
later, so unless it looks like it's painting us into a corner, it
probably doesn't need to be addressed right away.

Reviewed-by: John Snow <***@redhat.com>
Max Reitz
2018-05-14 20:00:03 UTC
Permalink
This doesn't actually move any transaction code to Job yet, but it
renames the type for transactions from BlockJobTxn to JobTxn and makes
them contain Jobs rather than BlockJobs
---
include/block/block_int.h | 2 +-
include/block/blockjob.h | 11 ++++----
include/block/blockjob_int.h | 2 +-
include/qemu/job.h | 3 +++
block/backup.c | 2 +-
blockdev.c | 14 +++++------
blockjob.c | 60 +++++++++++++++++++++++---------------------
tests/test-blockjob-txn.c | 8 +++---
8 files changed, 54 insertions(+), 48 deletions(-)
Reviewed-by: Max Reitz <***@redhat.com>
John Snow
2018-05-14 20:05:47 UTC
Permalink
QAPI types aren't externally visible, so we can rename them without
causing problems. Before we add a job type to Job, rename the enum
so it can be used for more than just block jobs.
So we will be registering all jobs in a central place.
Seems fine.

Reviewed-by: John Snow <***@redhat.com>
John Snow
2018-05-14 20:15:14 UTC
Permalink
This moves freeing the Job object and its fields from block_job_unref()
to job_delete().
Reviewed-by: John Snow <***@redhat.com>

I guess the reference counting comes later.

*peeks*

Oh, soon!
Max Reitz
2018-05-14 20:28:50 UTC
Permalink
This moves the logic that implements job transactions from BlockJob to
Job.
---
include/block/blockjob.h | 54 ----------
include/block/blockjob_int.h | 10 --
include/qemu/job.h | 71 +++++++++++--
blockdev.c | 6 +-
blockjob.c | 238 +------------------------------------------
job.c | 235 ++++++++++++++++++++++++++++++++++++++++--
tests/test-blockjob-txn.c | 12 +--
tests/test-blockjob.c | 2 +-
8 files changed, 304 insertions(+), 324 deletions(-)
[...]
diff --git a/include/qemu/job.h b/include/qemu/job.h
index 614a2dea92..84a9eb7980 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
[...]
@@ -227,20 +242,52 @@ typedef enum JobCreateFlags {
[...]
+/**
+ *
+ * The caller must call either block_job_txn_unref() or block_job_completed()
*job_txn_unref()

(and maybe even job_completed() in preparation for the next patches)
+ * to release the reference that is automatically grabbed here.
+ *
+ */
+void job_txn_add_job(JobTxn *txn, Job *job);
[...]
diff --git a/job.c b/job.c
index 49dce57c9e..2d782859ac 100644
--- a/job.c
+++ b/job.c
[...]
@@ -80,6 +93,71 @@ static void __attribute__((__constructor__)) job_init(void)
[...]
+static int job_txn_apply(JobTxn *txn, int fn(Job *), bool lock)
(“6.7.6.3. (8) A declaration of a parameter as ‘function returning type’
shall be adjusted to ‘pointer to function returning type’, as in 6.3.2.1.”

Interesting. Didn't know that worked.)
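A minimal standalone example of that adjustment (names are illustrative):

```c
/* C99 6.7.6.3p8: a parameter declared as "function returning type" is
 * adjusted to "pointer to function returning type", so the two spellings
 * of the fn parameter below are interchangeable. */
static int apply(int fn(int), int x)    /* same as: int (*fn)(int) */
{
    return fn(x);
}

static int triple(int n)
{
    return 3 * n;
}
```

Inside apply(), fn is an ordinary function pointer either way; the
function-type spelling is purely notational.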

[...]
@@ -542,12 +632,141 @@ int job_finalize_single(Job *job)
[...]
+static int job_prepare(Job *job)
+{
+ if (job->ret == 0 && job->driver->prepare) {
+ job->ret = job->driver->prepare(job);
+ }
+ return job->ret;
+}
I'd have put this above job_commit() and the like, but it's not like it
matters functionally...

Well, you know me. With the comment fixed:

Reviewed-by: Max Reitz <***@redhat.com>
John Snow
2018-05-14 20:42:46 UTC
Permalink
This moves the job list from BlockJob to Job. Now we can check for
duplicate IDs in job_create().
Reviewed-by: John Snow <***@redhat.com>
Max Reitz
2018-05-14 20:53:58 UTC
Permalink
This moves the top-level job completion and cancellation functions from
BlockJob to Job.
---
include/block/blockjob.h | 55 -------------------------------
include/block/blockjob_int.h | 18 ----------
include/qemu/job.h | 58 ++++++++++++++++++++++++++++----
block.c | 2 +-
block/backup.c | 3 +-
block/commit.c | 8 ++---
block/mirror.c | 6 ++--
block/replication.c | 4 +--
block/stream.c | 2 +-
blockdev.c | 8 ++---
blockjob.c | 76 ------------------------------------------
job.c | 78 +++++++++++++++++++++++++++++++++++++++++---
qemu-img.c | 2 +-
tests/test-bdrv-drain.c | 5 ++-
tests/test-blockjob-txn.c | 14 ++++----
tests/test-blockjob.c | 21 ++++++------
block/trace-events | 3 --
trace-events | 1 +
18 files changed, 162 insertions(+), 202 deletions(-)
After this patch, there are a couple of places left that mention
block_job_enter(), those should be fixed. Well, and those two
block_job_completed() mentions.

[...]
diff --git a/block.c b/block.c
index 676e57f562..7a149bfea9 100644
--- a/block.c
+++ b/block.c
@@ -3362,7 +3362,7 @@ static void bdrv_close(BlockDriverState *bs)
void bdrv_close_all(void)
{
- block_job_cancel_sync_all();
+ job_cancel_sync_all();
Do we really want to cancel jobs that might have nothing to do with the
block layer?
nbd_export_close_all();
/* Drop references from requests still in flight, such as canceled block
[...]
diff --git a/block/commit.c b/block/commit.c
index 02a8af9127..56c3810bad 100644
--- a/block/commit.c
+++ b/block/commit.c
@@ -112,12 +112,12 @@ static void commit_complete(Job *job, void *opaque)
blk_unref(s->top);
/* If there is more than one reference to the job (e.g. if called from
- * block_job_finish_sync()), block_job_completed() won't free it and
- * therefore the blockers on the intermediate nodes remain. This would
- * cause bdrv_set_backing_hd() to fail. */
+ * block_job_finish_sync()), job_completed() won't free it and therefore
Nice start, but there is more to do in this line. :-)

(Though probably in some other patch.)

Max
+ * the blockers on the intermediate nodes remain. This would cause
+ * bdrv_set_backing_hd() to fail. */
block_job_remove_all_bdrv(bjob);
- block_job_completed(&s->common, ret);
+ job_completed(job, ret);
g_free(data);
/* If bdrv_drop_intermediate() didn't already do that, remove the commit
Kevin Wolf
2018-05-15 12:59:48 UTC
Permalink
Post by Max Reitz
This moves the top-level job completion and cancellation functions from
BlockJob to Job.
@@ -3362,7 +3362,7 @@ static void bdrv_close(BlockDriverState *bs)
void bdrv_close_all(void)
{
- block_job_cancel_sync_all();
+ job_cancel_sync_all();
Do we really want to cancel jobs that might have nothing to do with the
block layer?
Yes, though I admit it's a bit ugly to do it here. Alternatively, I
could assert here that no jobs are running and pull the
job_cancel_sync_all() into the callers. For vl.c, this is trivial.
qemu-nbd.c uses 'atexit(bdrv_close_all);', so I'd have to introduce a
wrapper function that calls both job_cancel_sync_all() and
bdrv_close_all().

Would you prefer that?

Kevin
Max Reitz
2018-05-16 10:52:17 UTC
Permalink
Post by Kevin Wolf
Post by Max Reitz
This moves the top-level job completion and cancellation functions from
BlockJob to Job.
@@ -3362,7 +3362,7 @@ static void bdrv_close(BlockDriverState *bs)
void bdrv_close_all(void)
{
- block_job_cancel_sync_all();
+ job_cancel_sync_all();
Do we really want to cancel jobs that might have nothing to do with the
block layer?
Yes, though I admit it's a bit ugly to do it here. Alternatively, I
could assert here that no jobs are running and pull the
job_cancel_sync_all() into the callers. For vl.c, this is trivial.
qemu-nbd.c uses 'atexit(bdrv_close_all);', so I'd have to introduce a
wrapper function that calls both job_cancel_sync_all() and
bdrv_close_all().
Would you prefer that?
Yes, I'd like that.

Thanks!

Max
John Snow
2018-05-14 20:58:31 UTC
Permalink
This moves BlockJob.status and the closely related functions
(block_)job_state_transition() and (block_)job_apply_verb to Job. The
two QAPI enums are renamed to JobStatus and JobVerb.
In good faith that the TODOs dissolve by the end of the series:

Reviewed-by: John Snow <***@redhat.com>

(As a bonus, removing 'BLOCK' out of all of those enum names made the
table a little thinner column-wise. nice!)
Max Reitz
2018-05-14 20:59:57 UTC
Permalink
This moves block_job_yield() to the Job layer.
---
include/block/blockjob_int.h | 8 --------
include/qemu/job.h | 9 +++++++--
block/backup.c | 2 +-
block/mirror.c | 2 +-
blockjob.c | 16 ----------------
job.c | 20 ++++++++++++++++++--
tests/test-blockjob-txn.c | 2 +-
7 files changed, 28 insertions(+), 31 deletions(-)
Reviewed-by: Max Reitz <***@redhat.com>
Max Reitz
2018-05-14 21:06:46 UTC
Permalink
This moves block_job_dismiss() to the Job layer.
---
include/block/blockjob.h | 9 ---------
include/qemu/job.h | 7 ++++++-
blockdev.c | 8 +++++---
blockjob.c | 13 -------------
job.c | 15 ++++++++++++++-
tests/test-blockjob.c | 4 ++--
6 files changed, 27 insertions(+), 29 deletions(-)
Reviewed-by: Max Reitz <***@redhat.com>
Max Reitz
2018-05-14 21:11:56 UTC
Permalink
Instead of having a 'bool ready' in BlockJob, add a function that
derives its value from the job status.
At the same time, this fixes the behaviour to match what the QAPI
documentation promises for query-block-job: 'true if the job may be
completed'. When the ready flag was introduced in commit ef6dbf1e46e,
the flag never had to be reset to match the description because after
being ready, the jobs would immediately complete and disappear.
Job transactions and manual job finalisation were introduced only later.
With these changes, jobs may stay around even after having completed
(and they are not ready to be completed a second time); however, those
patches forgot to reset the ready flag.
---
include/block/blockjob.h | 5 -----
include/qemu/job.h | 3 +++
blockjob.c | 3 +--
job.c | 22 ++++++++++++++++++++++
qemu-img.c | 2 +-
tests/test-blockjob.c | 2 +-
6 files changed, 28 insertions(+), 9 deletions(-)
Reviewed-by: Max Reitz <***@redhat.com>
Max Reitz
2018-05-14 21:17:00 UTC
Permalink
This moves the logic that implements job transactions from BlockJob to
Job.
---
include/block/blockjob.h | 54 ----------
include/block/blockjob_int.h | 10 --
include/qemu/job.h | 71 +++++++++++--
blockdev.c | 6 +-
blockjob.c | 238 +------------------------------------------
job.c | 235 ++++++++++++++++++++++++++++++++++++++++--
tests/test-blockjob-txn.c | 12 +--
tests/test-blockjob.c | 2 +-
8 files changed, 304 insertions(+), 324 deletions(-)
[...]
diff --git a/job.c b/job.c
index 49dce57c9e..2d782859ac 100644
--- a/job.c
+++ b/job.c
[...]
@@ -263,9 +353,10 @@ void job_event_completed(Job *job)
notifier_list_notify(&job->on_finalize_completed, job);
}
-void job_event_pending(Job *job)
+static int job_event_pending(Job *job)
{
notifier_list_notify(&job->on_pending, job);
+ return 0;
}
void job_enter_cond(Job *job, bool(*fn)(Job *job))
On second thought, I don't know why this hunk is necessary.

Max
Max Reitz
2018-05-14 21:22:52 UTC
Permalink
The transition to the READY state was still performed in the BlockJob
layer, in the same function that sent the BLOCK_JOB_READY QMP event.
This patch brings the state transition to the Job layer and implements
the QMP event using a notifier called from the Job layer, like we
already do for other events related to state transitions.
---
include/block/blockjob.h | 3 +++
include/block/blockjob_int.h | 8 --------
include/qemu/job.h | 9 ++++++---
block/mirror.c | 6 +++---
blockjob.c | 36 +++++++++++++++++++-----------------
job.c | 16 +++++++++++++---
tests/test-bdrv-drain.c | 2 +-
tests/test-blockjob.c | 2 +-
8 files changed, 46 insertions(+), 36 deletions(-)
[...]
diff --git a/include/qemu/job.h b/include/qemu/job.h
index fb81cc7c09..20b48926d9 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
[...]
@@ -522,7 +528,4 @@ void job_defer_to_main_loop(Job *job, JobDeferToMainLoopFn *fn, void *opaque);
*/
int job_finish_sync(Job *job, void (*finish)(Job *, Error **errp), Error **errp);
-/* TODO To be removed from the public interface */
-void job_state_transition(Job *job, JobStatus s1);
-
\o/
#endif
[...]
diff --git a/blockjob.c b/blockjob.c
index 0512b41901..27f3199a20 100644
--- a/blockjob.c
+++ b/blockjob.c
[...]
@@ -387,13 +403,14 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
job->finalize_cancelled_notifier.notify = block_job_event_cancelled;
job->finalize_completed_notifier.notify = block_job_event_completed;
job->pending_notifier.notify = block_job_event_pending;
+ job->ready_notifier.notify = block_job_event_ready;
notifier_list_add(&job->job.on_finalize_cancelled,
&job->finalize_cancelled_notifier);
notifier_list_add(&job->job.on_finalize_completed,
&job->finalize_completed_notifier);
- notifier_list_add(&job->job.on_pending,
- &job->pending_notifier);
+ notifier_list_add(&job->job.on_pending, &job->pending_notifier);
Do you want to move this to the patch that introduced this line?
+ notifier_list_add(&job->job.on_ready, &job->ready_notifier);
error_setg(&job->blocker, "block device is in use by block job: %s",
job_type_str(&job->job));
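The notifier mechanism that these hunks wire up can be reduced to a small sketch (a Python model of QEMU's C NotifierList, not the real code; the event name is taken from the hunk above):

```python
class Notifier:
    def __init__(self, notify):
        self.notify = notify      # callback receiving the event payload

class NotifierList:
    def __init__(self):
        self.notifiers = []

    def add(self, notifier):
        self.notifiers.append(notifier)

    def notify(self, data):
        for n in self.notifiers:
            n.notify(data)

# The Job layer keeps one list per event (on_pending, on_ready, ...);
# the block layer registers its QMP-event emitters on them at job
# creation time, which is what block_job_create() does above.
events = []
on_ready = NotifierList()
on_ready.add(Notifier(lambda job: events.append(('BLOCK_JOB_READY', job))))
on_ready.notify('job0')
```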
Max Reitz
2018-05-14 21:33:04 UTC
Permalink
BlockJob has fields .offset and .len, which are actually misnomers today
because they are no longer tied to block device sizes, but just progress
counters. As such they make a lot of sense in generic Jobs.
This patch moves the fields to Job and renames them to .progress_current
and .progress_total to describe their function better.
---
include/block/blockjob.h | 25 -------------------------
include/qemu/job.h | 27 +++++++++++++++++++++++++++
block/backup.c | 8 ++++----
block/commit.c | 4 ++--
block/mirror.c | 4 ++--
block/stream.c | 4 ++--
blockjob.c | 26 ++++++++------------------
job.c | 10 ++++++++++
qemu-img.c | 8 ++++++--
9 files changed, 61 insertions(+), 55 deletions(-)
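A quick sketch of what the renamed counters amount to. This is a Python model; the helper names are assumptions for illustration, not necessarily the patch's API:

```python
class Job:
    """Progress as plain counters, decoupled from block device sizes."""
    def __init__(self):
        self.progress_current = 0
        self.progress_total = 0

def job_progress_set_remaining(job, remaining):
    # total is defined relative to what is already done
    job.progress_total = job.progress_current + remaining

def job_progress_update(job, done):
    job.progress_current += done
```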
Reviewed-by: Max Reitz <***@redhat.com>
John Snow
2018-05-14 21:34:21 UTC
Permalink
This moves reference counting from BlockJob to Job.
In order to keep calling the BlockJob cleanup code when the job is
deleted via job_unref(), introduce a new JobDriver.free callback. Every
block job must use block_job_free() for this callback, this is asserted
in block_job_create().
So far so good, though it does look a little silly that presently every
last job has the exact same free callback.
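The refcount-plus-free-callback shape being discussed can be sketched like this (a Python model, not the actual C code):

```python
class JobDriver:
    def __init__(self, free=None):
        self.free = free          # driver-specific cleanup callback

class Job:
    def __init__(self, driver):
        self.driver = driver
        self.refcnt = 1
        self.freed = False

def job_ref(job):
    job.refcnt += 1

def job_unref(job):
    job.refcnt -= 1
    if job.refcnt == 0:
        # the generic layer calls back into the driver before freeing,
        # which is how block_job_free() keeps running for block jobs
        if job.driver.free:
            job.driver.free(job)
        job.freed = True
```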

Also, I forgot to reply to #13 with this, but I suppose that the
approach you're taking -- of a fairly straightforward mechanical
refactor -- means we don't really have the opportunity to change any of
the pretty dumb names or existing peculiarities of design we've evolved.

I had hoped we'd be able to change a few things, but certainly keeping
them as-is makes the whole re-factoring process an awful lot simpler...

...Or maybe I'm getting ahead of myself, there's a lot of series left to go.

Ah, anyway:

Reviewed-by: John Snow <***@redhat.com>
Kevin Wolf
2018-05-15 09:06:47 UTC
Permalink
Post by John Snow
This moves reference counting from BlockJob to Job.
In order to keep calling the BlockJob cleanup code when the job is
deleted via job_unref(), introduce a new JobDriver.free callback. Every
block job must use block_job_free() for this callback, this is asserted
in block_job_create().
So far so good, though it does look a little silly that presently every
last job has the exact same free callback.
Also, I forgot to reply to #13 with this, but I suppose that the
approach you're taking -- of a fairly straightforward mechanical
refactor -- means we don't really have the opportunity to change any of
the pretty dumb names or existing peculiarities of design we've evolved.
I had hoped we'd be able to change a few things, but certainly keeping
them as-is makes the whole re-factoring process an awful lot simpler...
...Or maybe I'm getting ahead of myself, there's a lot of series left to go.
No, I think you're right. My approach was that we'd separate the generic
and the block-specific parts in this series without changing much in the
structure or the names, to keep the conversion as obvious as possible
(the series is long enough already).

However, I intentionally kept the QMP interface minimal where I wasn't
quite sure that we want to keep the old one. If you see anything in the
QMP interface that you'd like to see changed, please do comment.
Anything else is an implementation detail and can still be changed on
top of this series.

Kevin
John Snow
2018-05-14 21:53:16 UTC
Permalink
We cannot yet move the whole logic around job cancelling to Job because
it depends on quite a few other things that are still only in BlockJob,
but we can move the cancelled field at least.
Reviewed-by: John Snow <***@redhat.com>
John Snow
2018-05-14 22:02:19 UTC
Permalink
When block jobs need an AioContext, they just take it from their main
block node. Generic jobs don't have a main block node, so we need to
assign them an AioContext explicitly.
Nothing uses the field yet, but so far so good.

Reviewed-by: John Snow <***@redhat.com>
Max Reitz
2018-05-14 22:11:06 UTC
Permalink
This adds a QMP event that is emitted whenever a job transitions from
one status to another. For the event, a new qapi/job.json schema file is
created which will contain all job-related definitions that aren't tied
to the block layer.
---
qapi/block-core.json | 47 +-----------
qapi/job.json | 65 +++++++++++++++++
qapi/qapi-schema.json | 1 +
job.c | 10 +++
Makefile | 9 +++
Makefile.objs | 4 +
tests/qemu-iotests/030 | 6 +-
tests/qemu-iotests/040 | 2 +
tests/qemu-iotests/041 | 17 ++++-
tests/qemu-iotests/095 | 2 +-
tests/qemu-iotests/095.out | 6 ++
tests/qemu-iotests/109 | 2 +-
tests/qemu-iotests/109.out | 178 +++++++++++++++++++++++++++++++++++++++------
tests/qemu-iotests/124 | 8 ++
tests/qemu-iotests/127.out | 7 ++
tests/qemu-iotests/141 | 10 +--
tests/qemu-iotests/141.out | 29 ++++++++
tests/qemu-iotests/144 | 2 +-
tests/qemu-iotests/144.out | 7 ++
tests/qemu-iotests/156 | 2 +-
tests/qemu-iotests/156.out | 7 ++
tests/qemu-iotests/185 | 12 +--
tests/qemu-iotests/185.out | 10 +++
tests/qemu-iotests/191 | 4 +-
tests/qemu-iotests/191.out | 132 +++++++++++++++++++++++++++++++++
25 files changed, 492 insertions(+), 87 deletions(-)
create mode 100644 qapi/job.json
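The event itself is mechanical: every legal status transition emits one JOB_STATUS_CHANGE. A toy Python model follows; the transition set here is illustrative, not QEMU's full table:

```python
ALLOWED = {
    ('created', 'running'), ('running', 'paused'), ('paused', 'running'),
    ('running', 'ready'), ('ready', 'standby'), ('standby', 'ready'),
    ('running', 'pending'), ('ready', 'pending'),
    ('pending', 'concluded'), ('concluded', 'null'),
}

emitted = []

def job_state_transition(job, new_status):
    if (job['status'], new_status) not in ALLOWED:
        raise ValueError('invalid transition')
    job['status'] = new_status
    # every successful transition emits one JOB_STATUS_CHANGE event
    emitted.append({'event': 'JOB_STATUS_CHANGE',
                    'data': {'id': job['id'], 'status': new_status}})
```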
Your effort to keep this series at 42 patches is commendable, but I think it
would make sense to have a separate patch that creates qapi/job.json. ;-)

[...]
diff --git a/tests/qemu-iotests/030 b/tests/qemu-iotests/030
index 640a6dfd10..03aea460c9 100755
--- a/tests/qemu-iotests/030
+++ b/tests/qemu-iotests/030
result = self.vm.qmp('block-stream', device='node5', base=self.imgs[3], job_id='stream-node6')
self.assert_qmp(result, 'error/class', 'GenericError')
- event = self.vm.get_qmp_event(wait=True)
+ event = self.vm.event_wait(name='BLOCK_JOB_READY')
self.assertEqual(event['event'], 'BLOCK_JOB_READY')
This assertion is a bit useless now, but whatever.
self.assert_qmp(event, 'data/device', 'commit-drive0')
self.assert_qmp(event, 'data/type', 'commit')
time.sleep(0.1)
events = self.vm.get_qmp_events(wait=False)
- self.assertEqual(events, [], 'unexpected QMP event: %s' % events)
+ self.assertEqual(events, [], 'unexpected QMP event: %s' % events)
self.fail() would be nicer to read.
self.cancel_and_wait(resume=True)
[...]
diff --git a/tests/qemu-iotests/041 b/tests/qemu-iotests/041
index a860a31e9a..e94587950c 100755
--- a/tests/qemu-iotests/041
+++ b/tests/qemu-iotests/041
@@ -445,6 +445,8 @@ new_state = "1"
self.assert_qmp(event, 'data/device', 'drive0')
self.assert_qmp(event, 'data/error', 'Input/output error')
completed = True
+ self.assert_qmp(event, 'data/id', 'drive0')
There were plenty of such loops in 030. Why did you leave them as they
are and add this check here?

(I can understand it in the previous hunk, because that one has an else
branch, but this one does not.)

((And if you decide to always do this check, iotests.py needs it, too.))
self.assert_no_active_block_jobs()
self.vm.shutdown()
@@ -457,6 +459,10 @@ new_state = "1"
self.assert_qmp(result, 'return', {})
event = self.vm.get_qmp_event(wait=True)
+ self.assert_qmp(event, 'data/id', 'drive0')
+ event = self.vm.get_qmp_event(wait=True)
+
self.assertEquals(event['event'], 'BLOCK_JOB_ERROR')
Wouldn't this be simpler with
event = self.vm.event_wait(name='BLOCK_JOB_ERROR')?

Same in other hunks in this file.

(And technically in all cases (like 056) that use event_wait already.)

(Sure, if you want to check @id... But then you'd have to do the same
in 030. And I don't quite see the point of checking JOB_STATUS_CHANGE
just sporadically, and not even checking the target status.)
self.assert_qmp(event, 'data/device', 'drive0')
self.assert_qmp(event, 'data/operation', 'read')
[...]
diff --git a/tests/qemu-iotests/141 b/tests/qemu-iotests/141
index 2f9d7b9bc2..9ae23a6c63 100755
--- a/tests/qemu-iotests/141
+++ b/tests/qemu-iotests/141
[...]
@@ -179,7 +179,7 @@ test_blockjob \
'device': 'drv0',
'speed': 1}}" \
'return' \
- 'BLOCK_JOB_CANCELLED'
+ '"status": "null"'
The comment above this hunk says that "no event other than
BLOCK_JOB_CANCELLED will be emitted". It would make sense to change it
to e.g. "no block job event other than [...]".
_cleanup_qemu
[...]

You forgot to adjust 094, and although I cannot prove it (I don't have
an nfs setup handy right now), I have a hunch that this patch breaks 173
as well.

Max
Kevin Wolf
2018-05-16 16:15:04 UTC
Permalink
Post by Max Reitz
You forgot to adjust 094, and although I cannot prove it (I don't have
an nfs setup handy right now), I have a hunch that this patch breaks 173
as well.
NFS qemu-iotests look completely broken. And the block driver, too. I've
sent a fix for the block driver at least...

Not sure how the qemu-iotests are supposed to work, but I get messages
like these:

+mkdir: cannot create directory 'nfs://127.0.0.1//home/kwolf/source/qemu/tests/qemu-iotests/scratch': No such file or directory
+common.config: Error: $TEST_DIR (nfs://127.0.0.1//home/kwolf/source/qemu/tests/qemu-iotests/scratch) is not a directory

I guess I'm okay with breaking 173 in this series when the NFS tests are
in this state...

Kevin
Max Reitz
2018-05-14 22:31:23 UTC
Permalink
This adds QMP commands that control the transition between states of the
job lifecycle.
---
qapi/job.json | 112 ++++++++++++++++++++++++++++++++++++++++++++++
job-qmp.c | 140 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
MAINTAINERS | 1 +
Makefile.objs | 2 +-
trace-events | 9 ++++
5 files changed, 263 insertions(+), 1 deletion(-)
create mode 100644 job-qmp.c
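These commands mostly reduce to checking a verb-versus-status permission table before acting. A Python sketch (the table contents are illustrative, not QEMU's exact table):

```python
# Which verbs a job accepts depends on its current status.
VERB_TABLE = {
    'cancel':   {'created', 'running', 'ready', 'paused', 'standby', 'pending'},
    'pause':    {'running', 'ready'},
    'resume':   {'paused', 'standby'},
    'complete': {'ready'},
    'finalize': {'pending'},
    'dismiss':  {'concluded'},
}

def job_apply_verb(job, verb):
    # reject verbs that make no sense in the current state; note that
    # pausing an already paused job is an error, not a no-op
    if job['status'] not in VERB_TABLE[verb]:
        raise ValueError("job is in state '%s', cannot %s"
                         % (job['status'], verb))
```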
diff --git a/qapi/job.json b/qapi/job.json
index bd88a358d0..7b84158292 100644
--- a/qapi/job.json
+++ b/qapi/job.json
@@ -63,3 +63,115 @@
{ 'event': 'JOB_STATUS_CHANGE',
'data': { 'id': 'str',
'status': 'JobStatus' } }
+
+##
+#
+# Pause an active job.
+#
+# This command returns immediately after marking the active job for pausing.
+# Pausing an already paused job has no cumulative effect; a single job-resume
+# command will resume the job.
Pausing an already paused job is, in fact, an error.

(Which should be noted here instead of making it appear like it'd be
idempotent.)
+#
+# The job will pause as soon as possible, which means transitioning into the
+# PAUSED state if it was RUNNING, or into STANDBY if it was READY. The
+# corresponding JOB_STATUS_CHANGE event will be emitted.
+#
+# Cancelling a paused job automatically resumes it.
+#
+# @id: The job identifier.
+#
+# Since: 2.13
+##
+{ 'command': 'job-pause', 'data': { 'id': 'str' } }
[...]
+##
+#
+# Instruct an active background job to cancel at the next opportunity.
+# This command returns immediately after marking the active job for
+# cancellation.
+#
+# The job will cancel as soon as possible and then emit a JOB_STATUS_CHANGE
+# event. Usually, the status will change to ABORTING, but it is possible that
+# a job successfully completes (e.g. because it was almost done and there was
+# no opportunity to cancel earlier than completing the job) and transitions to
+# PENDING instead.
+#
+# Note that if you issue 'job-cancel' after a mirror block job has indicated
+# (via the event BLOCK_JOB_READY, and by transitioning into the READY state)
+# that the source and destination are synchronized, then the job always
+# completes successfully and transitions to PENDING as well as triggering the
+# event BLOCK_JOB_COMPLETED, to indicate that the mirroring has ended and the
+# destination now has a point-in-time copy tied to the time of the
+# cancellation.
+#
+# @id: The job identifier.
+#
+# @force: If true, and the job has already emitted the event BLOCK_JOB_READY,
+#         the job will be cancelled
+#         immediately (even if it is paused) instead of waiting for the
+#         destination to complete its final synchronization
The note on "final synchronization" is extremely mirror-specific. I see
three choices here:

(1) If mirror stays the only job with this special cancel semantics,
then we are free to note that this is a mirror-specific flag here.

(2) Even if some other job might come along at some point where use of
@force may make sense, that doesn't stop us from now noting that only
mirror supports this, which helps readers understand what "destination"
and "final synchronization" mean.

(Yes, so (1) and (2) are basically the same.)

(3) We try to find some general description and drop the last part.
Like "If a job would normally decide to complete instead of actually
aborting, this flag can be used to convince it otherwise." But that's
so handwavy, I'd rather just mark it as a special mirror flag for now.
+#
+# Since: 2.13
+##
+{ 'command': 'job-cancel', 'data': { 'id': 'str', '*force': 'bool' } }
[...]
diff --git a/Makefile.objs b/Makefile.objs
index 3df8d58e49..253e0356f3 100644
--- a/Makefile.objs
+++ b/Makefile.objs
@@ -66,7 +66,7 @@ chardev-obj-y = chardev/
# block-obj-y is code used by both qemu system emulation and qemu-img
block-obj-y += nbd/
-block-obj-y += block.o blockjob.o job.o
+block-obj-y += block.o blockjob.o job.o job-qmp.o
Shouldn't this be in common-obj-y like blockdev?

Max
block-obj-y += block/ scsi/
block-obj-y += qemu-io-cmds.o
block-obj-$(CONFIG_REPLICATION) += replication.o
Kevin Wolf
2018-05-15 14:08:50 UTC
Permalink
Post by Max Reitz
This adds QMP commands that control the transition between states of the
job lifecycle.
---
qapi/job.json | 112 ++++++++++++++++++++++++++++++++++++++++++++++
job-qmp.c | 140 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
MAINTAINERS | 1 +
Makefile.objs | 2 +-
trace-events | 9 ++++
5 files changed, 263 insertions(+), 1 deletion(-)
create mode 100644 job-qmp.c
diff --git a/qapi/job.json b/qapi/job.json
index bd88a358d0..7b84158292 100644
--- a/qapi/job.json
+++ b/qapi/job.json
@@ -63,3 +63,115 @@
{ 'event': 'JOB_STATUS_CHANGE',
'data': { 'id': 'str',
'status': 'JobStatus' } }
+
+##
+#
+# Pause an active job.
+#
+# This command returns immediately after marking the active job for pausing.
+# Pausing an already paused job has no cumulative effect; a single job-resume
+# command will resume the job.
Pausing an already paused job is, in fact, an error.
(Which should be noted here instead of making it appear like it'd be
idempotent.)
Aye. Interestingly, I managed to fix it below in job-resume, but missed
it here. I should also stick in a patch somewhere early in the series
that fixes the documentation of block-job-pause/resume.
Post by Max Reitz
+##
+#
+# Instruct an active background job to cancel at the next opportunity.
+# This command returns immediately after marking the active job for
+# cancellation.
+#
+# The job will cancel as soon as possible and then emit a JOB_STATUS_CHANGE
+# event. Usually, the status will change to ABORTING, but it is possible that
+# a job successfully completes (e.g. because it was almost done and there was
+# no opportunity to cancel earlier than completing the job) and transitions to
+# PENDING instead.
+#
+# Note that if you issue 'job-cancel' after a mirror block job has indicated
+# (via the event BLOCK_JOB_READY, and by transitioning into the READY state)
+# that the source and destination are synchronized, then the job always
+# completes successfully and transitions to PENDING as well as triggering the
+# event BLOCK_JOB_COMPLETED, to indicate that the mirroring has ended and the
+# destination now has a point-in-time copy tied to the time of the
+# cancellation.
+#
+# @id: The job identifier.
+#
+# @force: If true, and the job has already emitted the event BLOCK_JOB_READY,
+#         the job will be cancelled
+#         immediately (even if it is paused) instead of waiting for the
+#         destination to complete its final synchronization
The note on "final synchronization" is extremely mirror-specific. I see
(1) If mirror stays the only job with this special cancel semantics,
then we are free to note that this is a mirror-specific flag here.
(2) Even if some other job might come along at some point where use of
@force may make sense, that doesn't stop us from now noting that only
mirror supports this, which helps readers understand what "destination"
and "final synchronization" mean.
(Yes, so (1) and (2) are basically the same.)
(3) We try to find some general description and drop the last part.
Like "If a job would normally decide to complete instead of actually
aborting, this flag can be used to convince it otherwise." But that's
so handwavy, I'd rather just mark it as a special mirror flag for now.
Or how about this one?

(4) Mirror is really abusing cancel for a second completion mode and we
don't want to have this kind of ugliness in job-cancel. Remove
@force from the schema and internally always use force=true. For
now, block-job-cancel does the job (no pun intended) for mirror, and
maybe we can later introduce a way to select completion mode with
job-complete.

This would also get us rid of that whole long paragraph above that
explains how mirror jobs have an exceptional behaviour.
Post by Max Reitz
diff --git a/Makefile.objs b/Makefile.objs
index 3df8d58e49..253e0356f3 100644
--- a/Makefile.objs
+++ b/Makefile.objs
@@ -66,7 +66,7 @@ chardev-obj-y = chardev/
# block-obj-y is code used by both qemu system emulation and qemu-img
block-obj-y += nbd/
-block-obj-y += block.o blockjob.o job.o
+block-obj-y += block.o blockjob.o job.o job-qmp.o
Shouldn't this be in common-obj-y like blockdev?
Seems to build with that change, so it can't be wrong...

Kevin
Max Reitz
2018-05-16 10:54:17 UTC
Permalink
Post by Kevin Wolf
Post by Max Reitz
This adds QMP commands that control the transition between states of the
job lifecycle.
---
qapi/job.json | 112 ++++++++++++++++++++++++++++++++++++++++++++++
job-qmp.c | 140 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
MAINTAINERS | 1 +
Makefile.objs | 2 +-
trace-events | 9 ++++
5 files changed, 263 insertions(+), 1 deletion(-)
create mode 100644 job-qmp.c
diff --git a/qapi/job.json b/qapi/job.json
index bd88a358d0..7b84158292 100644
--- a/qapi/job.json
+++ b/qapi/job.json
@@ -63,3 +63,115 @@
{ 'event': 'JOB_STATUS_CHANGE',
'data': { 'id': 'str',
'status': 'JobStatus' } }
+
+##
+#
+# Pause an active job.
+#
+# This command returns immediately after marking the active job for pausing.
+# Pausing an already paused job has no cumulative effect; a single job-resume
+# command will resume the job.
Pausing an already paused job is, in fact, an error.
(Which should be noted here instead of making it appear like it'd be
idempotent.)
Aye. Interestingly, I managed to fix it below in job-resume, but missed
it here. I should also stick in a patch somewhere early in the series
that fixes the documentation of block-job-pause/resume.
Post by Max Reitz
+##
+#
+# Instruct an active background job to cancel at the next opportunity.
+# This command returns immediately after marking the active job for
+# cancellation.
+#
+# The job will cancel as soon as possible and then emit a JOB_STATUS_CHANGE
+# event. Usually, the status will change to ABORTING, but it is possible that
+# a job successfully completes (e.g. because it was almost done and there was
+# no opportunity to cancel earlier than completing the job) and transitions to
+# PENDING instead.
+#
+# Note that if you issue 'job-cancel' after a mirror block job has indicated
+# (via the event BLOCK_JOB_READY, and by transitioning into the READY state)
+# that the source and destination are synchronized, then the job always
+# completes successfully and transitions to PENDING as well as triggering the
+# event BLOCK_JOB_COMPLETED, to indicate that the mirroring has ended and the
+# destination now has a point-in-time copy tied to the time of the
+# cancellation.
+#
+# @id: The job identifier.
+#
+# @force: If true, and the job has already emitted the event BLOCK_JOB_READY,
+#         the job will be cancelled
+#         immediately (even if it is paused) instead of waiting for the
+#         destination to complete its final synchronization
The note on "final synchronization" is extremely mirror-specific. I see
(1) If mirror stays the only job with this special cancel semantics,
then we are free to note that this is a mirror-specific flag here.
(2) Even if some other job might come along at some point where use of
@force may make sense, that doesn't stop us from now noting that only
mirror supports this, which helps readers understand what "destination"
and "final synchronization" mean.
(Yes, so (1) and (2) are basically the same.)
(3) We try to find some general description and drop the last part.
Like "If a job would normally decide to complete instead of actually
aborting, this flag can be used to convince it otherwise." But that's
so handwavy, I'd rather just mark it as a special mirror flag for now.
Or how about this one?
(4) Mirror is really abusing cancel for a second completion mode and we
don't want to have this kind of ugliness in job-cancel.
Yeah, well, right, that was implicit and I tried to be more diplomatic
by calling it "special semantics". :-)
Post by Kevin Wolf
Remove
@force from the schema and internally always use force=true. For
now, block-job-cancel does the job (no pun intended) for mirror, and
maybe we can later introduce a way to select completion mode with
job-complete.
Sounds good to me.
Post by Kevin Wolf
This would also get us rid of that whole long paragraph above that
explains how mirror jobs have an exceptional behaviour.
Yes.

Max
Post by Kevin Wolf
Post by Max Reitz
diff --git a/Makefile.objs b/Makefile.objs
index 3df8d58e49..253e0356f3 100644
--- a/Makefile.objs
+++ b/Makefile.objs
@@ -66,7 +66,7 @@ chardev-obj-y = chardev/
# block-obj-y is code used by both qemu system emulation and qemu-img
block-obj-y += nbd/
-block-obj-y += block.o blockjob.o job.o
+block-obj-y += block.o blockjob.o job.o job-qmp.o
Shouldn't this be in common-obj-y like blockdev?
Seems to build with that change, so it can't be wrong...
Kevin
Max Reitz
2018-05-14 22:26:07 UTC
Permalink
This moves block_job_dismiss() to the Job layer.
---
include/block/blockjob.h | 9 ---------
include/qemu/job.h | 7 ++++++-
blockdev.c | 8 +++++---
blockjob.c | 13 -------------
job.c | 15 ++++++++++++++-
tests/test-blockjob.c | 4 ++--
6 files changed, 27 insertions(+), 29 deletions(-)
[...]
diff --git a/blockdev.c b/blockdev.c
index 31319a6d5a..15753d9719 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3915,14 +3915,16 @@ void qmp_block_job_finalize(const char *id, Error **errp)
void qmp_block_job_dismiss(const char *id, Error **errp)
{
AioContext *aio_context;
- BlockJob *job = find_block_job(id, &aio_context, errp);
+ BlockJob *bjob = find_block_job(id, &aio_context, errp);
+ Job *job;
clang (probably gcc as well if I had enabled tracing in that build...)
complains that this should be initialized to NULL because of...
- if (!job) {
+ if (!bjob) {
return;
}
trace_qmp_block_job_dismiss(job);
...this.

Max
- block_job_dismiss(&job, errp);
+ job = &bjob->job;
+ job_dismiss(&job, errp);
aio_context_release(aio_context);
}
John Snow
2018-05-14 22:33:00 UTC
Permalink
Hmm, this one is a bit more than just code motion due to the way the
aio_context acquisition has changed. I think at a minimum a good commit
message is warranted.
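In the C code the completion callbacks recover the BlockJob from the embedded Job via container_of() pointer arithmetic. This Python sketch emulates that downcast with an explicit back-reference, which is a modeling device only, not how the C works:

```python
class Job:
    def __init__(self, container):
        self._container = container   # stands in for container_of()
        self.deferred_to_main_loop = False

class BlockJob:
    def __init__(self):
        self.job = Job(self)          # generic part embedded in the block job

def backup_complete(job, opaque):
    # the generic-layer callback signature takes the Job; the block
    # layer downcasts back to its own type before doing block work
    bjob = job._container
    assert isinstance(bjob, BlockJob)
    return bjob
```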
---
include/block/blockjob.h | 5 ----
include/block/blockjob_int.h | 19 ---------------
include/qemu/job.h | 20 ++++++++++++++++
block/backup.c | 7 +++---
block/commit.c | 11 +++++----
block/mirror.c | 15 ++++++------
block/stream.c | 14 +++++------
blockjob.c | 57 ++++----------------------------------------
job.c | 33 +++++++++++++++++++++++++
tests/test-bdrv-drain.c | 7 +++---
tests/test-blockjob-txn.c | 13 +++++-----
tests/test-blockjob.c | 7 +++---
12 files changed, 98 insertions(+), 110 deletions(-)
diff --git a/include/block/blockjob.h b/include/block/blockjob.h
index 04efc94ffc..90942826f5 100644
--- a/include/block/blockjob.h
+++ b/include/block/blockjob.h
@@ -92,11 +92,6 @@ typedef struct BlockJob {
*/
bool ready;
- /**
- * Set to true when the job has deferred work to the main loop.
- */
- bool deferred_to_main_loop;
-
/** Status that is published by the query-block-jobs QMP API */
BlockDeviceIoStatus iostatus;
diff --git a/include/block/blockjob_int.h b/include/block/blockjob_int.h
index d64f30e6b0..0c2f8de381 100644
--- a/include/block/blockjob_int.h
+++ b/include/block/blockjob_int.h
@@ -233,23 +233,4 @@ void block_job_event_ready(BlockJob *job);
BlockErrorAction block_job_error_action(BlockJob *job, BlockdevOnError on_err,
int is_read, int error);
-typedef void BlockJobDeferToMainLoopFn(BlockJob *job, void *opaque);
-
-/**
- *
- * block_job_defer_to_main_loop:
- * @job: The job
- * @fn: The function to run in the main loop
- * @opaque: The opaque value that is passed to @fn
- *
- * This function must be called by the main job coroutine just before it
- * returns.  @fn is executed in the main loop with the AioContext acquired.
- *
- * Block jobs must call bdrv_unref(), bdrv_close(), and anything that uses
- * bdrv_drain_all() in the main loop.
- *
- * The @job AioContext is held while @fn executes.
-void block_job_defer_to_main_loop(BlockJob *job,
- BlockJobDeferToMainLoopFn *fn,
- void *opaque);
-
#endif
diff --git a/include/qemu/job.h b/include/qemu/job.h
index 1821f9ebd7..1a20534235 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -58,6 +58,9 @@ typedef struct Job {
*/
bool cancelled;
+ /** Set to true when the job has deferred work to the main loop. */
+ bool deferred_to_main_loop;
+
/** Element of the list of jobs */
QLIST_ENTRY(Job) job_list;
} Job;
@@ -131,6 +134,23 @@ Job *job_get(const char *id);
*/
int job_apply_verb(Job *job, JobVerb bv, Error **errp);
+typedef void JobDeferToMainLoopFn(Job *job, void *opaque);
+
+/**
+ * @fn: The function to run in the main loop
+ * @opaque: The opaque value that is passed to @fn
+ *
+ * This function must be called by the main job coroutine just before it
+ * returns.  @fn is executed in the main loop with the job AioContext acquired.
+ *
+ * Block jobs must call bdrv_unref(), bdrv_close(), and anything that uses
+ * bdrv_drain_all() in the main loop.
+ *
+ * The @job AioContext is held while @fn executes.
+ */
+void job_defer_to_main_loop(Job *job, JobDeferToMainLoopFn *fn, void *opaque);
+
/* TODO To be removed from the public interface */
void job_state_transition(Job *job, JobStatus s1);
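Conceptually, job_defer_to_main_loop() just queues the completion callback for the main loop instead of running it in the job coroutine. A toy Python model (the MainLoop class is an assumption standing in for QEMU's bottom-half machinery):

```python
class MainLoop:
    """Toy deferral queue standing in for QEMU's bottom halves."""
    def __init__(self):
        self.pending = []

    def run_once(self):
        work, self.pending = self.pending, []
        for fn, job, opaque in work:
            fn(job, opaque)

main_loop = MainLoop()

def job_defer_to_main_loop(job, fn, opaque):
    # mark the job so other code knows completion now happens in the
    # main loop, then queue the callback instead of calling it directly
    job['deferred_to_main_loop'] = True
    main_loop.pending.append((fn, job, opaque))
```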
diff --git a/block/backup.c b/block/backup.c
index ef0aa0e24e..22dd368c90 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -317,11 +317,12 @@ typedef struct {
int ret;
} BackupCompleteData;
-static void backup_complete(BlockJob *job, void *opaque)
+static void backup_complete(Job *job, void *opaque)
{
+ BlockJob *bjob = container_of(job, BlockJob, job);
BackupCompleteData *data = opaque;
- block_job_completed(job, data->ret);
+ block_job_completed(bjob, data->ret);
g_free(data);
}
@@ -519,7 +520,7 @@ static void coroutine_fn backup_run(void *opaque)
data = g_malloc(sizeof(*data));
data->ret = ret;
- block_job_defer_to_main_loop(&job->common, backup_complete, data);
+ job_defer_to_main_loop(&job->common.job, backup_complete, data);
}
static const BlockJobDriver backup_job_driver = {
diff --git a/block/commit.c b/block/commit.c
index 85baea8f92..d326766e4d 100644
--- a/block/commit.c
+++ b/block/commit.c
@@ -72,9 +72,10 @@ typedef struct {
int ret;
} CommitCompleteData;
-static void commit_complete(BlockJob *job, void *opaque)
+static void commit_complete(Job *job, void *opaque)
{
- CommitBlockJob *s = container_of(job, CommitBlockJob, common);
+ CommitBlockJob *s = container_of(job, CommitBlockJob, common.job);
+ BlockJob *bjob = &s->common;
CommitCompleteData *data = opaque;
BlockDriverState *top = blk_bs(s->top);
BlockDriverState *base = blk_bs(s->base);
@@ -90,7 +91,7 @@ static void commit_complete(BlockJob *job, void *opaque)
* the normal backing chain can be restored. */
blk_unref(s->base);
- if (!job_is_cancelled(&s->common.job) && ret == 0) {
+ if (!job_is_cancelled(job) && ret == 0) {
/* success */
ret = bdrv_drop_intermediate(s->commit_top_bs, base,
s->backing_file_str);
@@ -114,7 +115,7 @@ static void commit_complete(BlockJob *job, void *opaque)
* block_job_finish_sync()), block_job_completed() won't free it and
* therefore the blockers on the intermediate nodes remain. This would
* cause bdrv_set_backing_hd() to fail. */
- block_job_remove_all_bdrv(job);
+ block_job_remove_all_bdrv(bjob);
block_job_completed(&s->common, ret);
g_free(data);
data = g_malloc(sizeof(*data));
data->ret = ret;
- block_job_defer_to_main_loop(&s->common, commit_complete, data);
+ job_defer_to_main_loop(&s->common.job, commit_complete, data);
}
static const BlockJobDriver commit_job_driver = {
diff --git a/block/mirror.c b/block/mirror.c
index 163d83e34a..4929191b81 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -484,9 +484,10 @@ typedef struct {
int ret;
} MirrorExitData;
-static void mirror_exit(BlockJob *job, void *opaque)
+static void mirror_exit(Job *job, void *opaque)
{
- MirrorBlockJob *s = container_of(job, MirrorBlockJob, common);
+ MirrorBlockJob *s = container_of(job, MirrorBlockJob, common.job);
+ BlockJob *bjob = &s->common;
MirrorExitData *data = opaque;
AioContext *replace_aio_context = NULL;
BlockDriverState *src = s->source;
@@ -568,7 +569,7 @@ static void mirror_exit(BlockJob *job, void *opaque)
* the blockers on the intermediate nodes so that the resulting state is
* valid. Also give up permissions on mirror_top_bs->backing, which might
* block the removal. */
- block_job_remove_all_bdrv(job);
+ block_job_remove_all_bdrv(bjob);
bdrv_child_try_set_perm(mirror_top_bs->backing, 0, BLK_PERM_ALL,
&error_abort);
bdrv_replace_node(mirror_top_bs, backing_bs(mirror_top_bs), &error_abort);
@@ -576,9 +577,9 @@ static void mirror_exit(BlockJob *job, void *opaque)
/* We just changed the BDS the job BB refers to (with either or both of the
* bdrv_replace_node() calls), so switch the BB back so the cleanup does
* the right thing. We don't need any permissions any more now. */
- blk_remove_bs(job->blk);
- blk_set_perm(job->blk, 0, BLK_PERM_ALL, &error_abort);
- blk_insert_bs(job->blk, mirror_top_bs, &error_abort);
+ blk_remove_bs(bjob->blk);
+ blk_set_perm(bjob->blk, 0, BLK_PERM_ALL, &error_abort);
+ blk_insert_bs(bjob->blk, mirror_top_bs, &error_abort);
block_job_completed(&s->common, data->ret);
if (need_drain) {
bdrv_drained_begin(bs);
}
- block_job_defer_to_main_loop(&s->common, mirror_exit, data);
+ job_defer_to_main_loop(&s->common.job, mirror_exit, data);
}
static void mirror_complete(BlockJob *job, Error **errp)
diff --git a/block/stream.c b/block/stream.c
index 22c71ae100..0bba81678c 100644
--- a/block/stream.c
+++ b/block/stream.c
@@ -58,16 +58,16 @@ typedef struct {
int ret;
} StreamCompleteData;
-static void stream_complete(BlockJob *job, void *opaque)
+static void stream_complete(Job *job, void *opaque)
{
- StreamBlockJob *s = container_of(job, StreamBlockJob, common);
+ StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
+ BlockJob *bjob = &s->common;
StreamCompleteData *data = opaque;
- BlockDriverState *bs = blk_bs(job->blk);
+ BlockDriverState *bs = blk_bs(bjob->blk);
BlockDriverState *base = s->base;
Error *local_err = NULL;
- if (!job_is_cancelled(&s->common.job) && bs->backing &&
- data->ret == 0) {
+ if (!job_is_cancelled(job) && bs->backing && data->ret == 0) {
const char *base_id = NULL, *base_fmt = NULL;
if (base) {
base_id = s->backing_file_str;
/* Reopen the image back in read-only mode if necessary */
if (s->bs_flags != bdrv_get_flags(bs)) {
/* Give up write permissions before making it read-only */
- blk_set_perm(job->blk, 0, BLK_PERM_ALL, &error_abort);
+ blk_set_perm(bjob->blk, 0, BLK_PERM_ALL, &error_abort);
bdrv_reopen(bs, s->bs_flags, NULL);
}
/* Modify backing chain and close BDSes in main loop */
data = g_malloc(sizeof(*data));
data->ret = ret;
- block_job_defer_to_main_loop(&s->common, stream_complete, data);
+ job_defer_to_main_loop(&s->common.job, stream_complete, data);
}
static const BlockJobDriver stream_job_driver = {
diff --git a/blockjob.c b/blockjob.c
index d44f5c2e50..6021d885be 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -347,7 +347,7 @@ static void block_job_decommission(BlockJob *job)
job->completed = true;
job->busy = false;
job->paused = false;
- job->deferred_to_main_loop = true;
+ job->job.deferred_to_main_loop = true;
block_job_txn_del_job(job);
job_state_transition(&job->job, JOB_STATUS_NULL);
job_unref(&job->job);
@@ -502,7 +502,7 @@ static int block_job_finish_sync(BlockJob *job,
/* block_job_drain calls block_job_enter, and it should be enough to
* induce progress until the job completes or moves to the main thread.
*/
- while (!job->deferred_to_main_loop && !job->completed) {
+ while (!job->job.deferred_to_main_loop && !job->completed) {
block_job_drain(job);
}
while (!job->completed) {
@@ -722,7 +722,7 @@ void block_job_cancel(BlockJob *job, bool force)
block_job_cancel_async(job, force);
if (!block_job_started(job)) {
block_job_completed(job, -ECANCELED);
- } else if (job->deferred_to_main_loop) {
+ } else if (job->job.deferred_to_main_loop) {
block_job_completed_txn_abort(job);
} else {
block_job_enter(job);
@@ -1038,7 +1038,7 @@ static void block_job_enter_cond(BlockJob *job, bool(*fn)(BlockJob *job))
if (!block_job_started(job)) {
return;
}
- if (job->deferred_to_main_loop) {
+ if (job->job.deferred_to_main_loop) {
return;
}
@@ -1053,7 +1053,7 @@ static void block_job_enter_cond(BlockJob *job, bool(*fn)(BlockJob *job))
return;
}
- assert(!job->deferred_to_main_loop);
+ assert(!job->job.deferred_to_main_loop);
timer_del(&job->sleep_timer);
job->busy = true;
block_job_unlock();
@@ -1159,50 +1159,3 @@ BlockErrorAction block_job_error_action(BlockJob *job, BlockdevOnError on_err,
}
return action;
}
-
-typedef struct {
- BlockJob *job;
- AioContext *aio_context;
- BlockJobDeferToMainLoopFn *fn;
- void *opaque;
-} BlockJobDeferToMainLoopData;
-
-static void block_job_defer_to_main_loop_bh(void *opaque)
-{
- BlockJobDeferToMainLoopData *data = opaque;
- AioContext *aio_context;
-
- /* Prevent race with block_job_defer_to_main_loop() */
- aio_context_acquire(data->aio_context);
-
- /* Fetch BDS AioContext again, in case it has changed */
- aio_context = blk_get_aio_context(data->job->blk);
- if (aio_context != data->aio_context) {
- aio_context_acquire(aio_context);
- }
-
- data->fn(data->job, data->opaque);
-
- if (aio_context != data->aio_context) {
- aio_context_release(aio_context);
- }
-
- aio_context_release(data->aio_context);
-
- g_free(data);
-}
-
-void block_job_defer_to_main_loop(BlockJob *job,
- BlockJobDeferToMainLoopFn *fn,
- void *opaque)
-{
- BlockJobDeferToMainLoopData *data = g_malloc(sizeof(*data));
- data->job = job;
- data->aio_context = blk_get_aio_context(job->blk);
- data->fn = fn;
- data->opaque = opaque;
- job->deferred_to_main_loop = true;
-
- aio_bh_schedule_oneshot(qemu_get_aio_context(),
- block_job_defer_to_main_loop_bh, data);
-}
diff --git a/job.c b/job.c
index 6f97a4317e..b074b3ffd7 100644
--- a/job.c
+++ b/job.c
@@ -28,6 +28,7 @@
#include "qapi/error.h"
#include "qemu/job.h"
#include "qemu/id.h"
+#include "qemu/main-loop.h"
#include "trace-root.h"
static QLIST_HEAD(, Job) jobs = QLIST_HEAD_INITIALIZER(jobs);
@@ -170,3 +171,35 @@ void job_unref(Job *job)
g_free(job);
}
}
+
+typedef struct {
+ Job *job;
+ JobDeferToMainLoopFn *fn;
+ void *opaque;
+} JobDeferToMainLoopData;
+
+static void job_defer_to_main_loop_bh(void *opaque)
+{
+ JobDeferToMainLoopData *data = opaque;
+ Job *job = data->job;
+ AioContext *aio_context = job->aio_context;
+
+ /* Prevent race with job_defer_to_main_loop() */
+ aio_context_acquire(aio_context);
+ data->fn(data->job, data->opaque);
+ aio_context_release(aio_context);
+
+ g_free(data);
+}
+
This function showed up in '14, with dec7d42 from Stefan. His first
draft looked like Kevin's, until:

https://lists.gnu.org/archive/html/qemu-devel/2014-10/msg00111.html

Max, from 2014:

"I'm not so sure whether it'd be possible to change the BDS's AIO
context (in another thread) after the call to bdrv_get_aio_context() in
block_job_defer_to_main_loop() and before
block_job_defer_to_main_loop_bh() is run. If so, this might create race
conditions."

Err, I dunno either.
+void job_defer_to_main_loop(Job *job, JobDeferToMainLoopFn *fn, void *opaque)
+{
+ JobDeferToMainLoopData *data = g_malloc(sizeof(*data));
+ data->job = job;
+ data->fn = fn;
+ data->opaque = opaque;
+ job->deferred_to_main_loop = true;
+
+ aio_bh_schedule_oneshot(qemu_get_aio_context(),
+ job_defer_to_main_loop_bh, data);
+}
diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index f9e37d479c..4f8cba8377 100644
--- a/tests/test-bdrv-drain.c
+++ b/tests/test-bdrv-drain.c
@@ -496,9 +496,10 @@ typedef struct TestBlockJob {
bool should_complete;
} TestBlockJob;
-static void test_job_completed(BlockJob *job, void *opaque)
+static void test_job_completed(Job *job, void *opaque)
{
- block_job_completed(job, 0);
+ BlockJob *bjob = container_of(job, BlockJob, job);
+ block_job_completed(bjob, 0);
}
static void coroutine_fn test_job_start(void *opaque)
@@ -510,7 +511,7 @@ static void coroutine_fn test_job_start(void *opaque)
block_job_sleep_ns(&s->common, 100000);
}
- block_job_defer_to_main_loop(&s->common, test_job_completed, NULL);
+ job_defer_to_main_loop(&s->common.job, test_job_completed, NULL);
}
static void test_job_complete(BlockJob *job, Error **errp)
diff --git a/tests/test-blockjob-txn.c b/tests/test-blockjob-txn.c
index 26b4bbb230..c03f9662d8 100644
--- a/tests/test-blockjob-txn.c
+++ b/tests/test-blockjob-txn.c
@@ -24,16 +24,17 @@ typedef struct {
int *result;
} TestBlockJob;
-static void test_block_job_complete(BlockJob *job, void *opaque)
+static void test_block_job_complete(Job *job, void *opaque)
{
- BlockDriverState *bs = blk_bs(job->blk);
+ BlockJob *bjob = container_of(job, BlockJob, job);
+ BlockDriverState *bs = blk_bs(bjob->blk);
int rc = (intptr_t)opaque;
- if (job_is_cancelled(&job->job)) {
+ if (job_is_cancelled(job)) {
rc = -ECANCELED;
}
- block_job_completed(job, rc);
+ block_job_completed(bjob, rc);
bdrv_unref(bs);
}
@@ -54,8 +55,8 @@ static void coroutine_fn test_block_job_run(void *opaque)
}
}
- block_job_defer_to_main_loop(job, test_block_job_complete,
- (void *)(intptr_t)s->rc);
+ job_defer_to_main_loop(&job->job, test_block_job_complete,
+ (void *)(intptr_t)s->rc);
}
typedef struct {
diff --git a/tests/test-blockjob.c b/tests/test-blockjob.c
index fa31481537..5f43bd72a4 100644
--- a/tests/test-blockjob.c
+++ b/tests/test-blockjob.c
@@ -161,11 +161,12 @@ typedef struct CancelJob {
bool completed;
} CancelJob;
-static void cancel_job_completed(BlockJob *job, void *opaque)
+static void cancel_job_completed(Job *job, void *opaque)
{
+ BlockJob *bjob = container_of(job, BlockJob, job);
CancelJob *s = opaque;
s->completed = true;
- block_job_completed(job, 0);
+ block_job_completed(bjob, 0);
}
static void cancel_job_complete(BlockJob *job, Error **errp)
@@ -191,7 +192,7 @@ static void coroutine_fn cancel_job_start(void *opaque)
}
- block_job_defer_to_main_loop(&s->common, cancel_job_completed, s);
+ job_defer_to_main_loop(&s->common.job, cancel_job_completed, s);
}
static const BlockJobDriver test_cancel_driver = {
Kevin Wolf
2018-05-15 12:22:35 UTC
Permalink
Post by John Snow
Hmm, this one is a bit more than just code motion due to the way the
aio_context acquisition has changed. I think at a minimum a good commit
message is warranted.
I know it took a while to convince myself that it's right. I wonder
where the commit message has gone...
Post by John Snow
@@ -170,3 +171,35 @@ void job_unref(Job *job)
g_free(job);
}
}
+
+typedef struct {
+ Job *job;
+ JobDeferToMainLoopFn *fn;
+ void *opaque;
+} JobDeferToMainLoopData;
+
+static void job_defer_to_main_loop_bh(void *opaque)
+{
+ JobDeferToMainLoopData *data = opaque;
+ Job *job = data->job;
+ AioContext *aio_context = job->aio_context;
+
+ /* Prevent race with job_defer_to_main_loop() */
+ aio_context_acquire(aio_context);
+ data->fn(data->job, data->opaque);
+ aio_context_release(aio_context);
+
+ g_free(data);
+}
+
This function showed up in '14, with dec7d42 from Stefan. His first
draft looked like Kevin's, until:
https://lists.gnu.org/archive/html/qemu-devel/2014-10/msg00111.html
"I'm not so sure whether it'd be possible to change the BDS's AIO
context (in another thread) after the call to bdrv_get_aio_context() in
block_job_defer_to_main_loop() and before
block_job_defer_to_main_loop_bh() is run. If so, this might create race
conditions."
Err, I dunno either.
This one at least is not a problem in the new code any more because we
have job->aio_context now, which is updated when the BDS's context
changes. We read job->aio_context only in the BH, so we always get the
current one.

In contrast, the old code put it into BlockJobDeferToMainLoopData at
the time when the BH was scheduled, so it could be outdated when the BH
actually ran.
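The difference boils down to closure capture. A toy model (purely illustrative Python, none of it QEMU code) of capturing the context at schedule time versus re-reading it when the BH runs:

```python
# Toy model of the two schemes. "old" copies the context into the
# closure when the BH is scheduled (like BlockJobDeferToMainLoopData
# did), "new" re-reads job.aio_context when the BH actually runs.
class Job:
    def __init__(self, ctx):
        self.aio_context = ctx

def schedule_old(job):
    captured = job.aio_context      # snapshot taken now
    return lambda: captured         # stale if the context moves later

def schedule_new(job):
    return lambda: job.aio_context  # read at BH execution time

job = Job('ctx-A')
old_bh = schedule_old(job)
new_bh = schedule_new(job)
job.aio_context = 'ctx-B'           # context switches before the BH fires
print(old_bh(), new_bh())           # ctx-A ctx-B
```

Since the new code only dereferences job->aio_context inside the BH, it always sees the value current at execution time.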

Kevin
John Snow
2018-05-14 23:02:18 UTC
Permalink
This commit moves some core functions for dealing with the job coroutine
from BlockJob to Job. This includes primarily entering the coroutine
(both for the first and reentering) and yielding explicitly and at pause
points.
---
include/block/blockjob.h | 40 ---------
include/block/blockjob_int.h | 26 ------
include/qemu/job.h | 76 ++++++++++++++++
block/backup.c | 2 +-
block/commit.c | 4 +-
block/mirror.c | 22 ++---
block/replication.c | 2 +-
block/stream.c | 4 +-
blockdev.c | 8 +-
blockjob.c | 201 +++++++------------------------------------
job.c | 137 +++++++++++++++++++++++++++++
tests/test-bdrv-drain.c | 38 ++++----
tests/test-blockjob-txn.c | 12 +--
tests/test-blockjob.c | 14 +--
14 files changed, 296 insertions(+), 290 deletions(-)
[...]
/* Assumes the block_job_mutex is held */
-static bool block_job_timer_pending(BlockJob *job)
+static bool job_timer_pending(Job *job)
Is this intentionally left behind in blockjob.c, even once you've
changed the name (and moved the state to job.c)?

[...]
+void job_start(Job *job)
+{
+ assert(job && !job_started(job) && job->paused &&
+ job->driver && job->driver->start);
+ job->co = qemu_coroutine_create(job_co_entry, job);
+ job->pause_count--;
+ job->busy = true;
+ job->paused = false;
+ job_state_transition(job, JOB_STATUS_RUNNING);
+ aio_co_enter(job->aio_context, job->co);
I suppose patch 16 was the time to ask this, but are there any
detriments to setting the AIO Context at creation time and then using
that consistently instead of just fetching the AIO context of the BDS
whenever we need it?

I guess if the context changes then we've got it covered in
block_job_attached_aio_context.
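For what it's worth, the pause_count handshake that job_start() performs can be modeled as follows (an illustrative Python sketch of the invariants asserted above, not the actual QEMU structures):

```python
# Model of job_start(): jobs are created with pause_count == 1 so they
# cannot run before being started; start() drops that initial pause
# reference and flips the job to RUNNING, matching the assertions in
# the C code above.
class Job:
    def __init__(self):
        self.pause_count = 1    # created paused
        self.paused = True
        self.busy = False
        self.status = 'created'

    def start(self):
        # same preconditions as the assert in job_start()
        assert self.paused and self.pause_count > 0
        self.pause_count -= 1
        self.busy = True
        self.paused = False
        self.status = 'running'

job = Job()
job.start()
print(job.status, job.pause_count)  # running 0
```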

Reviewed-by: John Snow <***@redhat.com>
Kevin Wolf
2018-05-16 16:50:04 UTC
Permalink
Post by John Snow
This commit moves some core functions for dealing with the job coroutine
from BlockJob to Job. This includes primarily entering the coroutine
(both for the first and reentering) and yielding explicitly and at pause
points.
---
include/block/blockjob.h | 40 ---------
include/block/blockjob_int.h | 26 ------
include/qemu/job.h | 76 ++++++++++++++++
block/backup.c | 2 +-
block/commit.c | 4 +-
block/mirror.c | 22 ++---
block/replication.c | 2 +-
block/stream.c | 4 +-
blockdev.c | 8 +-
blockjob.c | 201 +++++++------------------------------------
job.c | 137 +++++++++++++++++++++++++++++
tests/test-bdrv-drain.c | 38 ++++----
tests/test-blockjob-txn.c | 12 +--
tests/test-blockjob.c | 14 +--
14 files changed, 296 insertions(+), 290 deletions(-)
[...]
/* Assumes the block_job_mutex is held */
-static bool block_job_timer_pending(BlockJob *job)
+static bool job_timer_pending(Job *job)
Is this intentionally left behind in blockjob.c, even once you've
changed the name (and moved the state to job.c)?
Yes, it's just a small callback that is structurally part of
block_job_set_speed(). Maybe I shouldn't rename it, but it does get a
Job rather than a BlockJob now, so I'm not sure.

Kevin
John Snow
2018-05-16 17:38:18 UTC
Permalink
Post by Kevin Wolf
Post by John Snow
This commit moves some core functions for dealing with the job coroutine
from BlockJob to Job. This includes primarily entering the coroutine
(both for the first and reentering) and yielding explicitly and at pause
points.
---
include/block/blockjob.h | 40 ---------
include/block/blockjob_int.h | 26 ------
include/qemu/job.h | 76 ++++++++++++++++
block/backup.c | 2 +-
block/commit.c | 4 +-
block/mirror.c | 22 ++---
block/replication.c | 2 +-
block/stream.c | 4 +-
blockdev.c | 8 +-
blockjob.c | 201 +++++++------------------------------------
job.c | 137 +++++++++++++++++++++++++++++
tests/test-bdrv-drain.c | 38 ++++----
tests/test-blockjob-txn.c | 12 +--
tests/test-blockjob.c | 14 +--
14 files changed, 296 insertions(+), 290 deletions(-)
[...]
/* Assumes the block_job_mutex is held */
-static bool block_job_timer_pending(BlockJob *job)
+static bool job_timer_pending(Job *job)
Is this intentionally left behind in blockjob.c, even once you've
changed the name (and moved the state to job.c)?
Yes, it's just a small callback that is structurally part of
block_job_set_speed(). Maybe I shouldn't rename it, but it does get a
Job rather than a BlockJob now, so I'm not sure.
Kevin
Your choice; it's not clear to me yet which things truly belong in which
file because I haven't had a chance to look at the finished result. I'm
sure if it looks like it needs to move later that someone will
eventually move it.

--js
Max Reitz
2018-05-14 23:09:52 UTC
Permalink
This adds a minimal query-jobs implementation that shouldn't pose many
design questions. It can later be extended to expose more information,
and especially job-specific information.
---
qapi/block-core.json | 18 ----------------
qapi/job.json | 59 ++++++++++++++++++++++++++++++++++++++++++++++++++++
include/qemu/job.h | 3 +++
job-qmp.c | 54 +++++++++++++++++++++++++++++++++++++++++++++++
job.c | 2 +-
5 files changed, 117 insertions(+), 19 deletions(-)
diff --git a/qapi/block-core.json b/qapi/block-core.json
index f2da7d696d..d38dfa7836 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1051,24 +1051,6 @@
'data': ['top', 'full', 'none', 'incremental'] }
##
-# @JobType:
-#
-# Type of a background job.
-#
-# @commit: block commit job type, see "block-commit"
-#
-# @stream: block stream job type, see "block-stream"
-#
-# @mirror: drive mirror job type, see "drive-mirror"
-#
-# @backup: drive backup job type, see "drive-backup"
-#
-# Since: 1.7
-##
-{ 'enum': 'JobType',
- 'data': ['commit', 'stream', 'mirror', 'backup'] }
-
-##
#
# Represents command verbs that can be applied to a job.
diff --git a/qapi/job.json b/qapi/job.json
index 7b84158292..742fa97768 100644
--- a/qapi/job.json
+++ b/qapi/job.json
@@ -5,6 +5,24 @@
##
##
+# @JobType:
+#
+# Type of a background job.
+#
+# @commit: block commit job type, see "block-commit"
+#
+# @stream: block stream job type, see "block-stream"
+#
+# @mirror: drive mirror job type, see "drive-mirror"
+#
+# @backup: drive backup job type, see "drive-backup"
+#
+# Since: 1.7
+##
+{ 'enum': 'JobType',
+ 'data': ['commit', 'stream', 'mirror', 'backup'] }
+
+##
FWIW, I'd put this code movement into the hypothetical commit which
specifically adds job.json.

(Same as JobVerb, which is not moved in this series, but which very
likely should be.)
# @JobStatus:
#
# Indicates the present state of a given job in its lifetime.
@@ -175,3 +193,44 @@
# Since: 2.13
##
{ 'command': 'job-finalize', 'data': { 'id': 'str' } }
+
+##
+# @JobInfo:
+#
+# Information about a job.
+#
+# @id: The job identifier
+#
+# @type: The job type
I'm not really happy with this description that effectively provides no
information that one cannot already get from the field name, but I
cannot come up with something better, so I'd rather stop typing now.

(OK, my issue here is that "job type" can be anything. That could mean
e.g. "Is this a block job?". Maybe an explicit reference to JobType
might help here, although that is already part of the documentation. On
that note, maybe a quote from the documentation might help make my point
that this description is not very useful:

"type: JobType"
The job type

Maybe "The kind of job that is being performed"?)
Why the "state/status"? Forgive me my incompetence, but I don't see a
real difference here. But in any case, one ought to be enough, no?

(OK, full disclosure: My gut tells me "status" is what we now call the
"progress", and this field should be called "state". But it's called
"status" now and it doesn't really make a difference, so it's fine to
describe it as either.)
+#
+#
Hm, not really. This makes it sound like at any point, this will be the
maximum even in the future, but that's not true.

Maybe "estimated progress maximum"? Or be even more verbose (no, that
doesn't hurt): "This is an estimation of the value @current-progress
needs to reach for the job to complete."

(Actually, I find it important to note that it is an estimation, maybe
we even want to be really explicit about the fact that this value may
change all the time, in any direction.)
+#
+# @error: If this field is present, the job failed; if it is
+# still missing in the CONCLUDED state, this indicates
+# successful completion.
+#
+# The value is a human-readable error message to describe
+# the reason for the job failure. It should not be parsed
+# by applications.
+#
+# Since: 2.13
+##
+{ 'struct': 'JobInfo',
+ 'data': { 'id': 'str', 'type': 'JobType', 'status': 'JobStatus',
+ 'current-progress': 'int', 'total-progress': 'int',
+ '*error': 'str' } }
+
+##
+# @query-jobs:
+#
+# Return information about jobs.
+#
+# Returns: a list with a @JobInfo for each active job
+#
+# Since: 2.13
+##
+{ 'command': 'query-jobs', 'returns': ['JobInfo'] }
[...]
diff --git a/job-qmp.c b/job-qmp.c
index f32cb8b73e..7425a2a490 100644
--- a/job-qmp.c
+++ b/job-qmp.c
@@ -138,3 +138,57 @@ void qmp_job_dismiss(const char *id, Error **errp)
job_dismiss(&job, errp);
aio_context_release(aio_context);
}
+
+static JobInfo *job_query_single(Job *job, Error **errp)
+{
+ JobInfo *info;
+ const char *errmsg = NULL;
+
+ assert (!job_is_internal(job));
One stray space.

Max
+
+ if (job->ret < 0) {
+ errmsg = strerror(-job->ret);
+ }
+
+ info = g_new(JobInfo, 1);
+ *info = (JobInfo) {
+ .id = g_strdup(job->id),
+ .type = job_type(job),
+ .status = job->status,
+ .current_progress = job->progress_current,
+ .total_progress = job->progress_total,
+ .has_error = !!errmsg,
+ .error = g_strdup(errmsg),
+ };
+
+ return info;
+}
+
+JobInfoList *qmp_query_jobs(Error **errp)
+{
+ JobInfoList *head = NULL, **p_next = &head;
+ Job *job;
+
+ for (job = job_next(NULL); job; job = job_next(job)) {
+ JobInfoList *elem;
+ AioContext *aio_context;
+
+ if (job_is_internal(job)) {
+ continue;
+ }
+ elem = g_new0(JobInfoList, 1);
+ aio_context = job->aio_context;
+ aio_context_acquire(aio_context);
+ elem->value = job_query_single(job, errp);
+ aio_context_release(aio_context);
+ if (!elem->value) {
+ g_free(elem);
+ qapi_free_JobInfoList(head);
+ return NULL;
+ }
+ *p_next = elem;
+ p_next = &elem->next;
+ }
+
+ return head;
+}
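As a sketch of how a client might consume this, the reply below is hand-written sample data shaped like the JobInfo struct above; a real client would obtain it over QMP (e.g. via the iotests VM helper):

```python
# Minimal sketch of a query-jobs consumer. Only the JobInfo fields
# defined in this patch are assumed; the transport is faked.
def find_job(jobs, job_id):
    """Pick one job out of a query-jobs reply by its id."""
    return next(j for j in jobs if j['id'] == job_id)

def job_error(job):
    """A missing 'error' key in the CONCLUDED state means success."""
    assert job['status'] == 'concluded'
    return job.get('error')

reply = [{'id': 'job0', 'type': 'backup', 'status': 'concluded',
          'current-progress': 4194304, 'total-progress': 4194304}]
print(job_error(find_job(reply, 'job0')))  # None
```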
Max Reitz
2018-05-16 11:27:06 UTC
Permalink
Post by Max Reitz
This adds a minimal query-jobs implementation that shouldn't pose many
design questions. It can later be extended to expose more information,
and especially job-specific information.
+##
+#
+# Information about a job.
+#
+#
I'm not really happy with this description that effectively provides no
information that one cannot already get from the field name, but I
cannot come up with something better, so I'd rather stop typing now.
(OK, my issue here is that "job type" can be anything. That could mean
e.g. "Is this a block job?". Maybe an explicit reference to JobType
might help here, although that is already part of the documentation. On
that note, maybe a quote from the documentation might help make my point
"type: JobType"
The job type
Maybe "The kind of job that is being performed"?)
IMHO, that's just a more verbose way of saying nothing new.
True, but “Job.type: JobType -- The job type” is exactly the kind of
documentation people like to make fun of.
The
"problem" is that "type: JobType" is already descriptive enough, there
is no real need for providing any additional information.
Yes, but it still doesn’t read nicely.

One could give examples (“e.g. mirror or backup”), but then again, if we
were to remove any of those, we’d have to update the documentation here.
Also, you can again argue that such a list is precisely under JobType,
which is true.
Also, I'd like to mention that almost all of the documentation is taken
from BlockJobInfo, so if we decide to change something, we should
probably change it in both places.
Ha! Good excuse, but you know, you touched this, now you own it. :-)
Post by Max Reitz
Why the "state/status"? Forgive me my incompetence, but I don't see a
real difference here. But in any case, one ought to be enough, no?
(OK, full disclosure: My gut tells me "status" is what we now call the
"progress", and this field should be called "state". But it's called
"status" now and it doesn't really make a difference, so it's fine to
describe it as either.)
I'll defer to John who wrote this description originally. I think we're
just using status/state inconsistently for the same thing (JobStatus,
which represents the states of the job state machine).
Post by Max Reitz
+#
+#
Hm, not really. This makes it sound like at any point, this will be the
maximum even in the future, but that's not true.
Maybe "estimated progress maximum"? Or be even more verbose (no, that
doesn't hurt): "This is an estimation of the value @current-progress
needs to reach for the job to complete."
(Actually, I find it important to note that it is an estimation, maybe
we even want to be really explicit about the fact that this value may
change all the time, in any direction.)
I'll try to improve the documentation of these fields (both here and in
BlockJobInfo) for v2.
Thanks!

Max
John Snow
2018-05-14 23:10:10 UTC
Permalink
There is nothing block layer specific about block_job_sleep_ns(), so
move the function to Job.
---
include/block/blockjob_int.h | 11 -----------
include/qemu/job.h | 17 +++++++++++++++++
block/backup.c | 2 +-
block/commit.c | 2 +-
block/mirror.c | 4 ++--
block/stream.c | 2 +-
blockjob.c | 27 ---------------------------
job.c | 32 ++++++++++++++++++++++++++++++++
tests/test-bdrv-drain.c | 8 ++++----
tests/test-blockjob-txn.c | 2 +-
tests/test-blockjob.c | 2 +-
11 files changed, 60 insertions(+), 49 deletions(-)
diff --git a/include/block/blockjob_int.h b/include/block/blockjob_int.h
index 0a614a89b8..8937f5b163 100644
--- a/include/block/blockjob_int.h
+++ b/include/block/blockjob_int.h
@@ -134,17 +134,6 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
void block_job_free(Job *job);
/**
- * block_job_sleep_ns:
- * @job: The job that calls the function.
- * @ns: How many nanoseconds to stop for.
- *
- * Put the job to sleep (assuming that it wasn't canceled) for @ns
- * %QEMU_CLOCK_REALTIME nanoseconds. Canceling the job will immediately
- * interrupt the wait.
- */
-void block_job_sleep_ns(BlockJob *job, int64_t ns);
-
-/**
*
diff --git a/include/qemu/job.h b/include/qemu/job.h
index 00c7cda9a3..ddfa824315 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -168,6 +168,13 @@ void job_enter_cond(Job *job, bool(*fn)(Job *job));
void job_start(Job *job);
/**
+ * job_enter:
+ * @job: The job to enter.
+ *
+ * Continue the specified job by entering the coroutine.
+ */
+void job_enter(Job *job);
+
Is this a holdout from #18?

Eh, either way.

Reviewed-by: John Snow <***@redhat.com>
Max Reitz
2018-05-14 23:11:57 UTC
Permalink
qmp_to_opts() used to be a method of QMPTestCase, but recently we
started to add more Python test cases that don't make use of
QMPTestCase. In order to make the method usable there, move it to VM.
---
tests/qemu-iotests/041 | 6 +++---
tests/qemu-iotests/155 | 2 +-
tests/qemu-iotests/iotests.py | 45 ++++++++++++++++++++++---------------------
3 files changed, 27 insertions(+), 26 deletions(-)
Reviewed-by: Max Reitz <***@redhat.com>
John Snow
2018-05-14 23:23:21 UTC
Permalink
While we already moved the state related to job pausing to Job, the
functions to do so were still BlockJob only. This commit moves them over to
Job.
I'm slightly sad about the new callback, because it still seems silly to
have to specify something that's enforced with an assertion to be
ubiquitously true, but I think any other way of fixing that is probably
uglier.

So OK.

Reviewed-by: John Snow <***@redhat.com>
John Snow
2018-05-14 23:43:14 UTC
Permalink
Since we introduced an explicit status to block jobs, BlockJob.completed
is redundant because it can be derived from the status. Remove the field
from BlockJob and add a function to derive it from the status at the Job
level.
You're braver than I am.
---
include/block/blockjob.h | 3 ---
include/qemu/job.h | 3 +++
blockjob.c | 16 +++++++---------
job.c | 22 ++++++++++++++++++++++
qemu-img.c | 4 ++--
5 files changed, 34 insertions(+), 14 deletions(-)
diff --git a/include/block/blockjob.h b/include/block/blockjob.h
index ce136ff2ac..a2d16a700d 100644
--- a/include/block/blockjob.h
+++ b/include/block/blockjob.h
@@ -88,9 +88,6 @@ typedef struct BlockJob {
/** The opaque value that is passed to the completion function. */
void *opaque;
- /** True when job has reported completion by calling block_job_completed. */
- bool completed;
-
/** ret code passed to block_job_completed. */
int ret;
diff --git a/include/qemu/job.h b/include/qemu/job.h
index 0ba6cb3161..87b0500795 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -214,6 +214,9 @@ const char *job_type_str(Job *job);
/** Returns whether the job is scheduled for cancellation. */
bool job_is_cancelled(Job *job);
+/** Returns whether the job is in a completed state. */
+bool job_is_completed(Job *job);
+
/**
* job_resume(). If the job is supposed to be resumed by user action, call
diff --git a/blockjob.c b/blockjob.c
index a2b6bfc975..e0a51cfb3e 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -194,7 +194,7 @@ static void block_job_detach_aio_context(void *opaque)
job_pause(&job->job);
- while (!job->job.paused && !job->completed) {
+ while (!job->job.paused && !job_is_completed(&job->job)) {
block_job_drain(job);
}
@@ -270,7 +270,6 @@ const BlockJobDriver *block_job_driver(BlockJob *job)
static void block_job_decommission(BlockJob *job)
{
assert(job);
- job->completed = true;
Oops. I forgot I left this stuff in here. It's definitely safe to
remove, as you know :)
job->job.busy = false;
job->job.paused = false;
job->job.deferred_to_main_loop = true;
@@ -335,7 +334,7 @@ static void block_job_clean(BlockJob *job)
static int block_job_finalize_single(BlockJob *job)
{
- assert(job->completed);
+ assert(job_is_completed(&job->job));
/* Ensure abort is called for late-transactional failures */
block_job_update_rc(job);
@@ -428,10 +427,10 @@ static int block_job_finish_sync(BlockJob *job,
/* block_job_drain calls block_job_enter, and it should be enough to
* induce progress until the job completes or moves to the main thread.
*/
- while (!job->job.deferred_to_main_loop && !job->completed) {
+ while (!job->job.deferred_to_main_loop && !job_is_completed(&job->job)) {
block_job_drain(job);
}
- while (!job->completed) {
+ while (!job_is_completed(&job->job)) {
aio_poll(qemu_get_aio_context(), true);
}
ret = (job_is_cancelled(&job->job) && job->ret == 0)
@@ -472,7 +471,7 @@ static void block_job_completed_txn_abort(BlockJob *job)
while (!QLIST_EMPTY(&txn->jobs)) {
other_job = QLIST_FIRST(&txn->jobs);
ctx = blk_get_aio_context(other_job->blk);
- if (!other_job->completed) {
+ if (!job_is_completed(&other_job->job)) {
assert(job_is_cancelled(&other_job->job));
block_job_finish_sync(other_job, NULL, NULL);
}
@@ -514,7 +513,7 @@ static void block_job_completed_txn_success(BlockJob *job)
* txn.
*/
QLIST_FOREACH(other_job, &txn->jobs, txn_list) {
- if (!other_job->completed) {
+ if (!job_is_completed(&other_job->job)) {
return;
}
assert(other_job->ret == 0);
@@ -848,9 +847,8 @@ void block_job_early_fail(BlockJob *job)
void block_job_completed(BlockJob *job, int ret)
{
- assert(job && job->txn && !job->completed);
+ assert(job && job->txn && !job_is_completed(&job->job));
assert(blk_bs(job->blk)->job == job);
- job->completed = true;
job->ret = ret;
block_job_update_rc(job);
trace_block_job_completed(job, ret, job->ret);
diff --git a/job.c b/job.c
index 94ad01a51a..60ccb0640b 100644
--- a/job.c
+++ b/job.c
@@ -121,6 +121,28 @@ bool job_is_cancelled(Job *job)
return job->cancelled;
}
+bool job_is_completed(Job *job)
+{
+ switch (job->status) {
+ case JOB_STATUS_UNDEFINED:
+ case JOB_STATUS_CREATED:
+ case JOB_STATUS_RUNNING:
+ case JOB_STATUS_PAUSED:
+ case JOB_STATUS_READY:
+ case JOB_STATUS_STANDBY:
+ return false;
Well, I'd argue that NULL shouldn't have a state that it can answer
one way or the other, but sure. In practice it ought not matter.
+ case JOB_STATUS_WAITING:
+ case JOB_STATUS_PENDING:
+ case JOB_STATUS_ABORTING:
+ case JOB_STATUS_CONCLUDED:
+ case JOB_STATUS_NULL:
+ return true;
+ default:
+ g_assert_not_reached();
+ }
+ return false;
+}
+
bool job_started(Job *job)
{
return job->co;
diff --git a/qemu-img.c b/qemu-img.c
index 82f69269ae..6a2431a653 100644
--- a/qemu-img.c
+++ b/qemu-img.c
@@ -867,9 +867,9 @@ static void run_block_job(BlockJob *job, Error **errp)
aio_poll(aio_context, true);
qemu_progress_print(job->len ?
((float)job->offset / job->len * 100.f) : 0.0f, 0);
- } while (!job->ready && !job->completed);
+ } while (!job->ready && !job_is_completed(&job->job));
- if (!job->completed) {
+ if (!job_is_completed(&job->job)) {
ret = block_job_complete_sync(job, errp);
} else {
ret = job->ret;
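The mapping that replaces the removed flag can be modeled like this (a Python sketch of the job_is_completed() switch, using the QAPI status names in lower case):

```python
# Completion is derived from the job status instead of a separate
# BlockJob.completed flag, mirroring job_is_completed() above.
RUNNING_STATES = {'undefined', 'created', 'running',
                  'paused', 'ready', 'standby'}
COMPLETED_STATES = {'waiting', 'pending', 'aborting',
                    'concluded', 'null'}

def job_is_completed(status):
    if status in RUNNING_STATES:
        return False
    if status in COMPLETED_STATES:
        return True
    raise AssertionError(status)   # like g_assert_not_reached()

print(job_is_completed('running'), job_is_completed('concluded'))
# False True
```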
Max Reitz
2018-05-14 23:44:41 UTC
Permalink
This adds a test case that tests the new job-* QMP commands with
mirror and backup block jobs.
---
tests/qemu-iotests/219 | 201 ++++++++++++++++++++++++++++
tests/qemu-iotests/219.out | 327 +++++++++++++++++++++++++++++++++++++++++++++
tests/qemu-iotests/group | 1 +
3 files changed, 529 insertions(+)
create mode 100755 tests/qemu-iotests/219
create mode 100644 tests/qemu-iotests/219.out
Test looks good, but it fails (for me) on tmpfs because at the point of
the first query-jobs, they already have an offset of 65536.
diff --git a/tests/qemu-iotests/219 b/tests/qemu-iotests/219
new file mode 100755
index 0000000000..6cfe54b4db
--- /dev/null
+++ b/tests/qemu-iotests/219
@@ -0,0 +1,201 @@
[...]
+with iotests.FilePath('disk.img') as disk_path, \
+ iotests.FilePath('copy.img') as copy_path, \
+ iotests.VM() as vm:
+ img_size = '4M'
+ iotests.qemu_img_create('-f', iotests.imgfmt, disk_path, img_size)
+ iotests.qemu_io('-c', 'write 0 %s' % (img_size),
+ '-f', iotests.imgfmt, disk_path)
+
+ iotests.log('Launching VM...')
+ vm.add_blockdev(vm.qmp_to_opts({
+ 'driver': iotests.imgfmt,
+ 'node-name': 'drive0-node',
+ 'file': {
+ 'driver': 'file',
+ 'filename': disk_path,
+ },
+ }))
+ vm.launch()
+
+ # In order to keep things deterministic (especially progress in query-job,
+ # but related to this also automatic state transitions like job
+ # completion), but still get pause points often enough to avoid making this
+ # test veey slow, it's important to have the right ratio between speed and
s/veey/very/

(Although "veey" does have its charm.)
+ # buf_size.
+ #
+ # For backup, buf_size is hard-coded to the source image cluser size (64k),
s/cluser/cluster/
+ # so we'll pick the same for mirror. The slice time, i.e. the granularity
+ # of the rate limiting is 100ms. With a speed of 256k per second, we can
+ # get four pause points per second. This gives us 250ms per iteration,
+ # which should be enough to stay deterministic.
+
+ test_job_lifecycle(vm, 'drive-mirror', has_ready=True, job_args={
+ 'device': 'drive0-node',
+ 'target': copy_path,
+ 'sync': 'full',
+ 'speed': 262144,
+ 'buf_size': 65536,
+ })
+
+ test_job_lifecycle(vm, 'drive-backup', job_args={
+ 'device': 'drive0-node',
+ 'target': copy_path,
+ 'sync': 'full',
+ 'speed': 262144,
+ 'auto-finalize': auto_finalize,
+ 'auto-dismiss': auto_dismiss,
+ })
+
+ vm.shutdown()
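To make the speed/buf_size ratio reasoning in the comment above concrete, here is the arithmetic as a small Python sketch. The constants are the ones used in the test; the helper names are made up for illustration, they don't exist in iotests:

```python
# Rate-limit arithmetic from the comment in test 219 (illustrative only).
SLICE_TIME_MS = 100        # granularity of the rate limiter, per the comment
SPEED = 262144             # job speed limit, bytes per second
BUF_SIZE = 65536           # bytes copied per iteration (source cluster size)

def pause_points_per_second(speed, buf_size):
    """How many times per second the job hits its rate limit."""
    return speed // buf_size

def ms_per_iteration(speed, buf_size):
    """Wall-clock budget for copying one buf_size chunk at the given speed."""
    return 1000 * buf_size // speed

print(pause_points_per_second(SPEED, BUF_SIZE))  # 4
print(ms_per_iteration(SPEED, BUF_SIZE))         # 250
```

Four pause points per second, 250ms apart, which is what the comment relies on for determinism.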
diff --git a/tests/qemu-iotests/219.out b/tests/qemu-iotests/219.out
new file mode 100644
index 0000000000..e244be9ce8
--- /dev/null
+++ b/tests/qemu-iotests/219.out
@@ -0,0 +1,327 @@
[...]
+Pause/resume in READY
+=== Testing block-job-pause/block-job-resume ===
+{u'return': {}}
+{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'status': u'standby', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
+{u'return': [{u'status': u'standby', u'current-progress': 4194304, u'total-progress': 4194304, u'id': u'job0', u'type': u'mirror'}]}
+{u'return': {}}
+{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'status': u'ready', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
+{u'return': [{u'status': u'ready', u'current-progress': 4194304, u'total-progress': 4194304, u'id': u'job0', u'type': u'mirror'}]}
+=== Testing block-job-pause/job-resume ===
+{u'return': {}}
+{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'status': u'standby', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
+{u'return': [{u'status': u'standby', u'current-progress': 4194304, u'total-progress': 4194304, u'id': u'job0', u'type': u'mirror'}]}
+{u'return': {}}
+{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'status': u'ready', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
+{u'return': [{u'status': u'ready', u'current-progress': 4194304, u'total-progress': 4194304, u'id': u'job0', u'type': u'mirror'}]}
+=== Testing job-pause/block-job-resume ===
+{u'return': {}}
+{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'status': u'standby', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
+{u'return': [{u'status': u'standby', u'current-progress': 4194304, u'total-progress': 4194304, u'id': u'job0', u'type': u'mirror'}]}
+{u'return': {}}
+{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'status': u'ready', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
+{u'return': [{u'status': u'ready', u'current-progress': 4194304, u'total-progress': 4194304, u'id': u'job0', u'type': u'mirror'}]}
+=== Testing job-pause/job-resume ===
+{u'return': {}}
+{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'status': u'standby', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
+{u'return': [{u'status': u'standby', u'current-progress': 4194304, u'total-progress': 4194304, u'id': u'job0', u'type': u'mirror'}]}
+{u'return': {}}
+{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'status': u'ready', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
+{u'return': [{u'status': u'ready', u'current-progress': 4194304, u'total-progress': 4194304, u'id': u'job0', u'type': u'mirror'}]}
This is really, really mean. Don't you have any compassion for the
poor little job that just wants to have Feierabend?

It worked so hard and it's always on standby and instantly ready when
you need it. Yet, you keep it hanging. That's not nice.

Max
+{u'error': {u'class': u'GenericError', u'desc': u"Job 'job0' in state 'ready' cannot accept command verb 'finalize'"}}
+{u'error': {u'class': u'GenericError', u'desc': u"Job 'job0' in state 'ready' cannot accept command verb 'dismiss'"}}
+{u'error': {u'class': u'GenericError', u'desc': u"Job 'job0' in state 'ready' cannot accept command verb 'finalize'"}}
+{u'error': {u'class': u'GenericError', u'desc': u"Job 'job0' in state 'ready' cannot accept command verb 'dismiss'"}}
+{u'return': {}}
+
+Waiting for PENDING state...
+{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'status': u'waiting', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
+{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'status': u'pending', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
+{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'status': u'concluded', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
+{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'status': u'null', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
+{u'return': []}
Kevin Wolf
2018-05-15 14:15:18 UTC
Before we can make x-blockdev-create a background job, we need to
generalise the job infrastructure so that it can be used without any
associated block node.
This series extracts a Job object from the block job infrastructure,
which should contain everything related to jobs that doesn't require the
block layer to be involved.
Job creation with a centralised job-create command (if we even want
this) as well as the actual conversion of x-blockdev-create to Job is
left for another series. With 42 patches, the series must be perfect and
I can't add any more patches.
Thanks for the review, applied patches 1-6 in the hope of making v2 at
least a little bit shorter. :-)

Kevin
Eric Blake
2018-05-15 14:43:10 UTC
Post by Kevin Wolf
Before we can make x-blockdev-create a background job, we need to
generalise the job infrastructure so that it can be used without any
associated block node.
This series extracts a Job object from the block job infrastructure,
which should contain everything related to jobs that doesn't require the
block layer to be involved.
Job creation with a centralised job-create command (if we even want
this) as well as the actual conversion of x-blockdev-create to Job is
left for another series. With 42 patches, the series must be perfect and
I can't add any more patches.
Thanks for the review, applied patches 1-6 in the hope of making v2 at
least a little bit shorter. :-)
Or now you can split/add patches until you're back up to 42 patches in v2 ;)
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org
Eric Blake
2018-05-16 17:54:15 UTC
This moves freeing the Job object and its fields from block_job_unref()
to job_delete().
---
include/qemu/job.h | 3 +++
blockjob.c | 3 +--
job.c | 6 ++++++
3 files changed, 10 insertions(+), 2 deletions(-)
+
+void job_delete(Job *job)
+{
+ g_free(job->id);
+ g_free(job);
+}
Should this be free-like, acting as a no-op when job == NULL on input?
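For illustration, a free()-like variant might look like the sketch below. The Job struct here is a minimal stand-in, not the real QEMU type:

```c
/* Sketch of a job_delete() that tolerates NULL, mirroring g_free()
 * semantics.  Job is a simplified stand-in for the QEMU type. */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

typedef struct Job {
    char *id;
} Job;

void job_delete(Job *job)
{
    if (!job) {
        return;          /* no-op on NULL, like free()/g_free() */
    }
    free(job->id);
    free(job);
}
```

Since the real implementation uses g_free(), which already accepts NULL, only the top-level NULL check on job itself would be needed.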
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org
Eric Blake
2018-05-16 18:11:28 UTC
This moves BlockJob.status and the closely related functions
(block_)job_state_transition() and (block_)job_apply_verb to Job. The
two QAPI enums are renamed to JobStatus and JobVerb.
---
qapi/block-core.json | 14 +++----
include/block/blockjob.h | 3 --
include/qemu/job.h | 13 ++++++
blockjob.c | 102 +++++++++++------------------------------------
job.c | 56 ++++++++++++++++++++++++++
tests/test-blockjob.c | 39 +++++++++---------
block/trace-events | 2 -
trace-events | 4 ++
8 files changed, 122 insertions(+), 111 deletions(-)
@@ -90,4 +93,14 @@ Job *job_next(Job *job);
*/
Job *job_get(const char *id);
+/**
+ * Returns 0 if the verb can be applied; otherwise errp is set and -EPERM
+ * returned.
+ */
+int job_apply_verb(Job *job, JobVerb bv, Error **errp);
Why 'JobVerb bv' rather than 'jv' or 'verb'?
@@ -968,7 +913,6 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
job->refcnt = 1;
job->auto_finalize = !(flags & BLOCK_JOB_MANUAL_FINALIZE);
job->auto_dismiss = !(flags & BLOCK_JOB_MANUAL_DISMISS);
- block_job_state_transition(job, BLOCK_JOB_STATUS_CREATED);
No replacement to the counterpart job_state_transition() here because
that should be handled in the parent class now, right? [1]

...
+
+int job_apply_verb(Job *job, JobVerb bv, Error **errp)
Again, the name bv made sense for BlockJobVerb, but less so now.
@@ -81,6 +135,8 @@ void *job_create(const char *job_id, const JobDriver *driver, Error **errp)
job->driver = driver;
job->id = g_strdup(job_id);
+ job_state_transition(job, JOB_STATUS_CREATED);
+
[1] Yes, just took me a while to get to it.
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org
Eric Blake
2018-05-16 18:17:45 UTC
This moves reference counting from BlockJob to Job.
In order to keep calling the BlockJob cleanup code when the job is
deleted via job_unref(), introduce a new JobDriver.free callback. Every
block job must use block_job_free() for this callback, this is asserted
in block_job_create().
---
+++ b/job.c
+
+void job_unref(Job *job)
+{
+ if (--job->refcnt == 0) {
Should this be free()-like and allow an incoming job == NULL as a no-op?
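A NULL-tolerant version of the refcount/free-callback split could look like this sketch; the types and names are simplified stand-ins for the QEMU ones:

```c
/* Sketch of job_unref() with a per-subclass free callback and
 * free()-like NULL handling.  Simplified stand-in types. */
#include <assert.h>
#include <stdlib.h>

typedef struct Job Job;

typedef struct JobDriver {
    void (*free)(Job *job);   /* subclass cleanup, e.g. block_job_free() */
} JobDriver;

struct Job {
    const JobDriver *driver;
    int refcnt;
};

void job_ref(Job *job)
{
    job->refcnt++;
}

void job_unref(Job *job)
{
    if (!job) {
        return;               /* free()-like: NULL is a no-op */
    }
    if (--job->refcnt == 0) {
        if (job->driver && job->driver->free) {
            job->driver->free(job);
        }
        free(job);
    }
}
```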
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org
Eric Blake
2018-05-16 18:37:50 UTC
---
Rather sparse on the commit message, given the other comments in this
thread.
+++ b/include/qemu/job.h
@@ -58,6 +58,9 @@ typedef struct Job {
*/
bool cancelled;
+ /** Set to true when the job has deferred work to the main loop. */
+ bool deferred_to_main_loop;
+
/** Element of the list of jobs */
QLIST_ENTRY(Job) job_list;
} Job;
@@ -131,6 +134,23 @@ Job *job_get(const char *id);
*/
int job_apply_verb(Job *job, JobVerb bv, Error **errp);
+typedef void JobDeferToMainLoopFn(Job *job, void *opaque);
+
+/**
+ *
+ * This function must be called by the main job coroutine just before it
+ *
+ * Block jobs must call bdrv_unref(), bdrv_close(), and anything that uses
+ * bdrv_drain_all() in the main loop.
Do we still want this block-job-specific comment in the main job.h header?
+ *
+ */
+void job_defer_to_main_loop(Job *job, JobDeferToMainLoopFn *fn, void *opaque);
+
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org
Eric Blake
2018-05-16 18:03:31 UTC
This moves the job list from BlockJob to Job. Now we can check for
duplicate IDs in job_create().
---
include/block/blockjob.h | 3 ---
include/qemu/job.h | 19 +++++++++++++++++++
blockjob.c | 47 +++++++++++++++++++++++++----------------------
job.c | 31 +++++++++++++++++++++++++++++++
4 files changed, 75 insertions(+), 25 deletions(-)
@@ -71,4 +75,19 @@ JobType job_type(Job *job);
/** Returns the enum string for the JobType of a given Job. */
const char *job_type_str(Job *job);
+/**
+ *
+ * Returns the requested job, or %NULL if there are no more jobs left.
+ */
+Job *job_next(Job *job);
The new code is supposed to allow you to prime the search by passing in
NULL. This looks similar to the semantics originally held by
block_job_next(), where
-BlockJob *block_job_next(BlockJob *job)
+static bool is_block_job(Job *job)
{
- if (!job) {
- return QLIST_FIRST(&block_jobs);
- }
the old code did so by special-casing an incoming NULL.
- return QLIST_NEXT(job, job_list);
+ return job_type(job) == JOB_TYPE_BACKUP ||
+ job_type(job) == JOB_TYPE_COMMIT ||
+ job_type(job) == JOB_TYPE_MIRROR ||
+ job_type(job) == JOB_TYPE_STREAM;
+}
+
+BlockJob *block_job_next(BlockJob *bjob)
+{
+ Job *job = &bjob->job;
But the new code blindly dereferences what may be a NULL bjob. Taking
the address of a member of the NULL pointer may happen to work if that
member is at offset 0, but is also likely to trigger compiler or
sanitizer warnings; and does not work if the member is not at offset 0.

I think all you have to do is:

Job *job = bjob ? &bjob->job : NULL;
+
+ do {
+ job = job_next(job);
+ } while (job && !is_block_job(job));
+
+
+ return job ? container_of(job, BlockJob, job) : NULL;
Why two blank lines?
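Putting the suggested fix together, the NULL-primed iterator pattern might look like this sketch. It uses simplified stand-in types, a plain linked list instead of QLIST, and elides the is_block_job() type filter:

```c
/* Sketch of a NULL-primed iterator: passing NULL starts the walk, and
 * the subclass wrapper must not dereference a NULL pointer before
 * handing it to the generic iterator.  Simplified stand-in types. */
#include <assert.h>
#include <stddef.h>

typedef struct Job {
    struct Job *next;
} Job;

typedef struct BlockJob {
    Job job;    /* embedded base; happens to be at offset 0, but don't rely on it */
} BlockJob;

static Job *job_list_head;

Job *job_next(Job *job)
{
    return job ? job->next : job_list_head;   /* NULL primes the walk */
}

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

BlockJob *block_job_next(BlockJob *bjob)
{
    /* the suggested fix: only take the member's address when non-NULL */
    Job *job = bjob ? &bjob->job : NULL;

    job = job_next(job);    /* real code would also skip non-block jobs */

    return job ? container_of(job, BlockJob, job) : NULL;
}
```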
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org
Eric Blake
2018-05-16 19:26:52 UTC
This adds a QMP event that is emitted whenever a job transitions from
one status to another. For the event, a new qapi/job.json schema file is
created which will contain all job-related definitions that aren't tied
to the block layer.
---
+++ b/qapi/job.json
+##
+#
+# Emitted when a job transitions to a different status.
+#
+#
+# Since: 2.13
+##
+{ 'event': 'JOB_STATUS_CHANGE',
+ 'data': { 'id': 'str',
+ 'status': 'JobStatus' } }
Is it worth also trying to list the old state that the transition came
from? But that's new compared to what block jobs are currently doing, so
if we can't come up with a strong reason to add that, I'm okay.
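For comparison, a payload carrying the old state might look like the fragment below. The 'previous' field is invented for illustration; it is not part of the proposed schema:

```json
{ "event": "JOB_STATUS_CHANGE",
  "data": { "id": "job0",
            "previous": "running",
            "status": "paused" },
  "timestamp": { "seconds": 1526500000, "microseconds": 0 } }
```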
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org
Eric Blake
2018-05-16 19:13:49 UTC
This renames the BlockJobCreateFlags constants, moves a few JOB_INTERNAL
checks to job_create() and the auto_{finalize,dismiss} fields from
BlockJob to Job.
Do we still want to allow auto-finalize on all jobs, or should we keep
it just for legacy support on block jobs? Even more so for auto-dismiss
(if you use the legacy interface, that's what you expect to happen; but
other than for legacy block jobs, any sane management app is going to
request auto-dismiss be false, so why expose it to generic jobs?)

Of course, it may still be easiest to plumb up auto- actions in the
internal code (where it has to work to keep legacy block jobs from
breaking, but no new callers have to enable it), so my argument may not
apply until you actually expose a QMP interface for generic jobs.
+++ b/block/mirror.c
@@ -1282,7 +1282,7 @@ void mirror_start(const char *job_id, BlockDriverState *bs,
}
is_none_mode = mode == MIRROR_SYNC_MODE_NONE;
base = mode == MIRROR_SYNC_MODE_TOP ? backing_bs(bs) : NULL;
- mirror_start_job(job_id, bs, BLOCK_JOB_DEFAULT, target, replaces,
+ mirror_start_job(job_id, bs, JOB_DEFAULT, target, replaces,
speed, granularity, buf_size, backing_mode,
on_source_error, on_target_error, unmap, NULL, NULL,
&mirror_job_driver, is_none_mode, base, false,
Just an observation that this is a lot of parameters; would using boxed
QAPI types make these calls any more obvious? But that's a separate
cleanup.
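As a rough illustration of the boxed-parameter idea, the long argument list could be collected into one options struct. The names below are illustrative, not the real QAPI-generated type:

```c
/* Sketch of "boxed" parameters: one options struct instead of a dozen
 * positional arguments.  Illustrative names, not the real QAPI type. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef struct MirrorOptions {
    const char *job_id;
    int64_t speed;
    int64_t granularity;
    int64_t buf_size;
    bool unmap;
} MirrorOptions;

static int64_t mirror_start_boxed(const MirrorOptions *opts)
{
    /* the real function would create the job; here we just echo a field */
    return opts->speed;
}
```

Designated initializers then make each call site self-documenting, which is the readability win Eric is hinting at.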
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org
Eric Blake
2018-05-16 19:15:21 UTC
block_job_event_pending() doesn't only send a QMP event, but it also
transitions to the PENDING state. Split the function so that we get one
part only sending the event (like other block_job_event_* functions) and
another part than does the state transition.
s/than/that/
---
blockjob.c | 27 ++++++++++++++++++---------
1 file changed, 18 insertions(+), 9 deletions(-)
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org