Nobody,
Post by Nobody
You don't have to like it, but if you want to use it, you
have to make an effort to understand how it works
What did you think I was trying to do ?
Post by Nobody
rather than making assumptions which aren't actually true.
... and which assumptions would that be ?
As far as I can tell I did mention facts, and then mentioned why I do not
understand why it works/is done that way, as it creates problems.
Post by Nobody
Your problems stem from making incorrect assumptions,
not from the method.
Then tell me what those "wrong assumptions" are, please. You've hinted at
that a few times, but I've not seen any specifics (and neither does the
current post mention them).
But a remark:
I've been googling quite a bit to try to find out what's going on and how to
fix it, and one of the things I found was that shifting the whole grid by
half a pixel horizontally as well as vertically (which I have now mentioned
a few times before, but have not seen you respond to yet. Why ?) would
certainly work for lines, but would then create problems for polygons.
In short: that "method" you are referring to seems to be made up of at
least two, which do not even agree with each other ... :-\
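To make the clash concrete, here is a tiny arithmetic sketch (plain Python of my own, not OpenGL code) of what a uniform half-pixel shift does to integer coordinates:

```python
# Pixel centres sit at integer + 0.5 in window coordinates, so a
# uniform half-pixel shift moves integer coordinates onto centres.

def shift_half(coord):
    """The proposed half-pixel shift, applied to one coordinate."""
    return coord + 0.5

# A line endpoint given at x = 3 lands on a pixel centre (3.5) --
# the well-behaved position for the line rule:
assert shift_half(3) == 3.5

# But a polygon edge given at x = 3 also moves to 3.5, i.e. exactly
# onto a row/column of pixel centres, which is the awkward position
# for polygon edges: coverage there depends on rounding.
# So the shift helps lines while hurting polygons.
```

This is why the shift is advice for lines specifically, not a fix for everything drawn.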
Post by Nobody
As I said before: a pixel isn't a point, it's a rectangle.
A pixel doesn't have a single, specific location.
....
Post by Nobody
The centre of the bottom-left pixel is at (0.5,0.5) in window coordinates.
Thank you for that explanation. It also indicates a problem when the term
"pixel" is used, as the last time you mentioned "rectangular pixels" I
assumed, from the context, that you meant OpenGL's virtual ones, instead of
the screen's physical ones.
But a question: If I were to use the exact same explanation, but define the
origin of the screen as the center* of the top-left pixel, with it ranging
from -1/2 pixel to +1/2 pixel, would that make the explanation invalid ? If
so, why ? If not ....
* The position of a lamp (or almost any single-point light source) is
normally defined as its center, not somewhere on its outside. Why would it
be different for a pixel ?
I mean, if it's only a problem of definitions then that's easy to solve. :-)
Post by Nobody
is that in the case where two polygons share a common edge,
any pixel along that edge will belong to exactly one of the two
polygons.
I think you've here mentioned the reason why a quad drawn on top of a
line loop (of course using the same coordinates for both) does not fully
overlap the line loop: because it's *forced* to stop one pixel short of its
left/bottom end-coordinates, so it will not overlap an eventual next one.
A good choice, but not mentioned anywhere and as such not expected. :-\
I assumed that the above non-overlap was an effect of the problem I was/am
busy with, the non-alignment of the virtual and physical grids, and only
after a bout of googling did I realize that that might not be the case.
In other words: one problem solved (but more to go). :-)
Post by Nobody
2. The reason for using the pixel's centre is that it is unbiased.
Using any other location would result in rasterised polygons
exhibiting a net shift whose magnitude depends upon the raster
resolution.
Ehrmmm ... Although I think I understand what problem you are indicating
here, wasn't the problem that OpenGL is *not* using the pixel's center ?
Post by Nobody
for which the line intersects a diamond inscribed within the pixel
Yeah, I found that diamond too. Though I have to say that I do not quite
see how it, in a basic Ortho projection, would affect a single-pixel-width
line drawn from the center of a physical pixel to the center of another
physical pixel (regardless of which coordinates are used). The endpoints
would *always* be drawn where they were indicated. Currently I cannot be
sure of that ...
Post by Nobody
because unless x is a power of two, x*(1/x) typically won't
be exactly 1.0 when using floating-point arithmetic.
And that's *exactly* why an OpenGL pixel should *not* be placed, when using
a basic ortho projection and integer coordinates, on the border of two
physical ones.
Post by Nobody
The overall error is sufficiently small that it shouldn't matter
in practice ...
Guess again: That's how this thread started (read the subject line).
Post by Nobody
unless you've managed to construct a case where you're
consistently hitting the discontinuity in the rounding function.
Yeah, funny that: I'm using the *most basic* of setups (ortho projection
matching the viewport size, integer coordinates for any used vertex), and
I'm already able to construct that case.
Let me think about it ..... Nope, I get the strong, almost definite feeling
you're bullshitting me here. Sorry.
Post by Nobody
In short, this can be summarised with an allegory: a man goes
to the doctor, and says "Doctor, when I do this ... it hurts"; to
which the doctor replies "So stop doing it!".
Well, in my country we use that one as a joke, indicating the uselessness
of such an answer (which ignores the problem and only wants to get rid of
the symptoms). I'm not at all sure if you meant it that way, but I think
it's quite fitting. :-(
Thanks for your attempt to help me though.
Regards,
Rudy Wieser
Post by Nobody
Post by R.Wieser
Post by Nobody
*especially* to someone who is determined to ignore anything which they
don't want to hear.
I'm sorry, but all I hear from you is "you have to like what is as it is",
You don't have to like it, but if you want to use it, you have to make an
effort to understand how it works, rather than making assumptions which
aren't actually true.
Post by R.Wieser
with *absolutely no explanation* why the, obviously creating problems,
method is so good.
It's not an "obviously creating problems method". Your problems stem from
making incorrect assumptions, not from the method.
Post by R.Wieser
Why is, especially in ortho mode (where the Ortho projection's width and
height match those of the viewport, and thus the rectangle of pixels on
the screen), an OpenGL (virtual) pixel placed 1) anywhere else than on the
screen pixel 2) on the most awkward of mathematical places, where the
slightest of rounding errors causes on-screen changes ?
As I said before: a pixel isn't a point, it's a rectangle. A pixel
doesn't have a single, specific location.
To simplify matters, we can forget all about projections, and deal
directly in window coordinates. Let w and h be the width and height
(respectively) of the window in pixels.
The bottom-left corner of the bottom-left pixel is at the bottom-left
corner of the window, which is (0,0) in window coordinates.
Similarly, the top-right corner of the top-right pixel is at the top-right
corner of the window, which is (w,h) in window coordinates.
The top-right corner of the bottom-left pixel is at (1,1) in window
coordinates.
The centre of the bottom-left pixel is at (0.5,0.5) in window coordinates.
In short, window coordinates map a rectangular region of the screen to a
rectangular region of the Euclidean plane. That mapping is continuous
(i.e. defined over the reals, not the integers). Vertices (and other
positions, e.g. the raster position used for bitmap operations) are not
constrained to integer coordinates.
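The geometry just described can be sketched with a couple of trivial helpers (my own names, plain Python rather than OpenGL):

```python
# Window-coordinate geometry of pixel (i, j), where (0, 0) is the
# bottom-left pixel. Each pixel is a 1x1 square, not a point.

def pixel_corners(i, j):
    """Bottom-left and top-right corners of pixel (i, j)."""
    return (float(i), float(j)), (float(i + 1), float(j + 1))

def pixel_centre(i, j):
    """Centre of pixel (i, j)."""
    return (i + 0.5, j + 0.5)

# The bottom-left pixel spans (0,0)-(1,1) and is centred at (0.5,0.5):
print(pixel_corners(0, 0))  # ((0.0, 0.0), (1.0, 1.0))
print(pixel_centre(0, 0))   # (0.5, 0.5)
```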
In theory, a line or polygon is an infinite set of points, a subset of
R^2. But a finite set of pixels cannot exactly represent those, so
rasterisation has to produce an approximation which attempts to minimise
the difference between the ideal and the achievable.
In the absence of anti-aliasing, rasterising a filled polygon affects
exactly those pixels whose centres lie within the polygon (i.e. within the
convex hull of the polygon's vertices).
1. The reason for testing whether a specific point lies within the polygon
(rather than e.g. whether *any* part of the pixel lies inside the polygon)
is that in the case where two polygons share a common edge, any pixel
along that edge will belong to exactly one of the two polygons. This is of
fundamental importance when using read-modify-write operations such as
stencilling or glLogicOp(GL_XOR).
2. The reason for using the pixel's centre is that it is unbiased. Using
any other location would result in rasterised polygons exhibiting a net
shift whose magnitude depends upon the raster resolution.
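A minimal sketch of this centre-sampling rule, restricted to axis-aligned rectangles for simplicity (the half-open edge comparison is one possible tie-break, my own simplification, not necessarily what a given implementation does):

```python
# Centre-sampling rasterisation for axis-aligned rectangles: a pixel
# is covered iff its centre lies inside the rectangle. Half-open
# edges [x0, x1) make a shared edge belong to exactly one rectangle.

def covered_pixels(x0, y0, x1, y1, width, height):
    pixels = set()
    for j in range(height):
        for i in range(width):
            cx, cy = i + 0.5, j + 0.5
            if x0 <= cx < x1 and y0 <= cy < y1:
                pixels.add((i, j))
    return pixels

left  = covered_pixels(0, 0, 3, 2, 8, 4)   # quad A
right = covered_pixels(3, 0, 6, 2, 8, 4)   # quad B, shares the edge x = 3

# No pixel along the shared edge is drawn twice ...
assert left & right == set()
# ... and together the two quads leave no gap:
assert left | right == covered_pixels(0, 0, 6, 2, 8, 4)
```

This is exactly why read-modify-write operations like XOR stay consistent across shared edges.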
In the absence of anti-aliasing, rasterising a line segment of width one
affects exactly those pixels for which the line intersects a diamond
inscribed within the pixel, i.e. some point (x,y) on the line segment
satisfies the constraint |x-xc|+|y-yc|<0.5 where (xc,yc) is the
coordinates of the pixel centre.
The main reasons for the diamond rule are a) that it can be efficiently
implemented in hardware, and b) that it guarantees that lines do not
contain gaps.
And again, the reason for using the pixel's centre is that it is unbiased.
On average, half of the affected pixels will lie on each side of the line.
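The diamond rule can be checked by sampling the segment and testing the stated constraint directly (a brute-force sketch only; hardware does not rasterise this way):

```python
# Approximate diamond-exit test: a width-1 line affects a pixel iff
# some point (x, y) on the segment satisfies |x-xc| + |y-yc| < 0.5,
# where (xc, yc) is the pixel centre. We sample the segment finely.

def line_hits_pixel(x0, y0, x1, y1, xc, yc, steps=10000):
    for k in range(steps + 1):
        t = k / steps
        x = x0 + t * (x1 - x0)
        y = y0 + t * (y1 - y0)
        if abs(x - xc) + abs(y - yc) < 0.5:
            return True
    return False

# A horizontal line drawn centre-to-centre (y = 1.5) enters the
# diamond of every pixel it passes over:
assert all(line_hits_pixel(0.5, 1.5, 4.5, 1.5, i + 0.5, 1.5)
           for i in range(5))
```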
This all works fine in practice, except for one specific case: when you
draw a line which is either exactly horizontal (i.e. every point on the
line has the same Y coordinate) or exactly vertical (i.e. every point on the
line has the same X coordinate), and the constant coordinate (X or Y)
happens to be exactly mid-way between pixel centres (i.e. the coordinate
is an integer).
In that case, the fact that it's unbiased on average doesn't help because
the fact that the perpendicular coordinate (X for a vertical line, Y for a
horizontal line) is constant, combined with the deterministic nature of
computer arithmetic, means that all of the pixels will end up falling on
the same side of the line.
Almost anything which involves rounding has pathological cases, and for
the rasterisation algorithm used by OpenGL (and almost everything else),
vertical or horizontal lines with integer window coordinates are the
pathological case. For filled polygons, vertical or horizontal edges whose
coordinates are an integer plus 0.5 are the pathological case.
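The horizontal-line case can be reduced to the vertical term of the diamond test alone (a sketch of the idealised strict-inequality reading; a real implementation must break the tie somehow, which is exactly why the result is unpredictable):

```python
import math

def rows_entered(y):
    """Centres of the pixel rows whose diamond a horizontal line at
    height y enters, judging only the vertical term |y - yc| < 0.5.
    Row centres sit at k + 0.5 for integer k."""
    k = math.floor(y)
    nearby = [k - 0.5, k + 0.5, k + 1.5]
    return [yc for yc in nearby if abs(y - yc) < 0.5]

eps = 1e-9
print(rows_entered(2.0))        # [] : exactly between two rows, neither qualifies
print(rows_entered(2.0 + eps))  # [2.5] : every pixel lands in the upper row
print(rows_entered(2.0 - eps))  # [1.5] : every pixel lands in the lower row
```

The tiniest rounding error in y flips the entire line from one row of pixels to the other.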
Adding in transformations makes matters slightly worse. If you set an
orthographic projection which uses the window's dimensions in pixels, the
combination of the (user-defined) orthographic projection and the
(built-in) viewport transformation should theoretically be an identity
transformation. But in practice it will be slightly off because unless x
is a power of two, x*(1/x) typically won't be exactly 1.0 when using
floating-point arithmetic.
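The floating-point effect is easy to observe directly (Python doubles, i.e. the same IEEE-754 arithmetic):

```python
# x * (1/x) is not always exactly 1.0 in double precision, which is
# why a "theoretically identity" projection/viewport round trip can
# be very slightly off. (49 is a commonly cited offender.)

mismatches = [x for x in range(1, 1001) if x * (1.0 / x) != 1.0]
print(len(mismatches), mismatches[:5])

assert mismatches                       # the error really does occur
# Powers of two are exact, since 1/2^k is exactly representable:
assert all(x not in mismatches for x in (2, 4, 8, 256))
```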
The overall error is sufficiently small that it shouldn't matter in
practice ... unless you've managed to construct a case where you're
consistently hitting the discontinuity in the rounding function.
In short, this can be summarised with an allegory: a man goes to the
doctor, and says "Doctor, when I do this ... it hurts"; to which the
doctor replies "So stop doing it!".