Discussion:
text input and key down events
Rainer Deyke
2011-02-20 07:42:04 UTC
Permalink
I have a GUI program that listens for both key down and text input
events. The intended behavior is that each key is processed exactly
once, by exactly one GUI widget. However, when the user presses the
'1' key (or any other text input key), it can be processed twice: once
as a text input event by a text input widget and once as a shortcut key
by another widget. The text input widget cannot consume the key down
event because it can't match the key down event to the corresponding
text input event.

What is the recommended way to handle this? Is there a way to detect if
a particular key down event is used for text input so that I can
suppress these events while listening for text input?
--
Rainer Deyke - ***@eldwood.com
Rainer Deyke
2011-02-21 20:15:34 UTC
Permalink
Post by Rainer Deyke
I have a GUI program that listens for both key down and text input
events. The intended behavior is that each key is processed exactly
once, by exactly one GUI widget. However, when the user presses the
'1' key (or any other text input key), it can be processed twice: once
as a text input event by a text input widget and once as a shortcut key
by another widget. The text input widget cannot consume the key down
event because it can't match the key down event to the corresponding
text input event.
What is the recommended way to handle this? Is there a way to detect if
a particular key down event is used for text input so that I can
suppress these events while listening for text input?
As a temporary hack, I am setting the text input widget to consume all
key down events where the keysym is in unicode range (i.e. 30th bit not
set), unless either ctrl or alt is pressed. This seems to work.
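
A minimal sketch of that heuristic as a helper function, assuming SDL 1.3
keysym fields; the mask and the modifier test follow the description above and
are a guess, not something SDL guarantees:

#include "SDL.h"

/* Guess whether this key down will also show up as a text input event. */
static int keydown_is_probably_text(const SDL_keysym *ks)
{
    if (ks->sym & 0x20000000)              /* keysym outside the character range */
        return 0;
    if (ks->mod & (KMOD_CTRL | KMOD_ALT))  /* chord, so treat it as a shortcut */
        return 0;
    return 1;                              /* let the text widget consume it */
}
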
--
Rainer Deyke - ***@eldwood.com
Jeff Post
2011-02-21 21:38:05 UTC
Permalink
Post by Rainer Deyke
Post by Rainer Deyke
I have a GUI program that listens for both key down and text input
events. The intended behavior is that each key is processed exactly
once, by exactly one GUI widget. However, when the user presses the
'1' key (or any other text input key), it can be processed twice: once
as a text input event by a text input widget and once as a shortcut key
by another widget. The text input widget cannot consume the key down
event because it can't match the key down event to the corresponding
text input event.
What is the recommended way to handle this? Is there a way to detect if
a particular key down event is used for text input so that I can
suppress these events while listening for text input?
As a temporary hack, I am setting the text input widget to consume all
key down events where the keysym is in unicode range (i.e. 30th bit not
set), unless either ctrl or alt is pressed. This seems to work.
Why is that a hack?

The way I handle it is to define a general widget type (structure or class,
depending on language). Then each widget (text input, text output, file
selector, etc) attaches callbacks for the types of events they need to
handle. Events are processed by passing them to a processCallBack function
which runs through the widget list and passes the event to the callback
function for the topmost widget in the list that has registered a callback
function for that type of event. The widget that processes the event then
becomes the topmost widget.

The one exception is for an alert or error widget. If one is active, it is the
topmost widget and all events except for SDLQuit are passed to it. That just
means that nothing else will get done until the user responds to the error
widget.
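
A minimal C sketch of that dispatch scheme, with hypothetical names (Widget,
process_event, raise_widget) and each widget handling a single event type for
brevity:

#include "SDL.h"

typedef struct Widget Widget;
typedef void (*EventHandler)(Widget *self, const SDL_Event *ev);

struct Widget {
    Uint32       event_type;   /* e.g. SDL_KEYDOWN, the type this widget handles */
    EventHandler handler;
    Widget      *next;         /* list ordered topmost-first */
};

/* Unlink w and push it to the front so it becomes the topmost widget. */
static void raise_widget(Widget **list, Widget *w)
{
    Widget **pp = list;
    while (*pp && *pp != w)
        pp = &(*pp)->next;
    if (*pp) {
        *pp = w->next;
        w->next = *list;
        *list = w;
    }
}

/* Give the event to the topmost widget that registered for its type.
 * (An active alert/error widget would be special-cased before this loop.) */
static void process_event(Widget **list, const SDL_Event *ev)
{
    Widget *w;
    for (w = *list; w != NULL; w = w->next) {
        if (w->event_type == ev->type && w->handler != NULL) {
            w->handler(w, ev);
            raise_widget(list, w);
            return;
        }
    }
}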

If you think that might be useful for your application, contact me off list
and I'll send you the code (specify whether you prefer .zip or .tar.gz
format).

Jeff
john skaller
2011-02-22 01:37:18 UTC
Permalink
Post by Jeff Post
The way I handle it is to define a general widget type (structure or class,
depending on language). Then each widget (text input, text output, file
selector, etc) attaches callbacks for the types of events they need to
handle. Events are processed by passing them to a processCallBack function
which runs through the widget list and passes the event to the callback
function for the topmost widget in the list that has registered a callback
function for that type of event. The widget that processes the event then
becomes the topmost widget.
That's quite a sane algorithm, although it has one problem: callbacks.
Hard to avoid in C.

Callbacks suck because you lose the stack: your per-widget code becomes
a slave of the event loop.

There are two ways around this. One is to use threads. This also sucks
because it is overkill (consumes resources).

The other is to use a better language :) I think (not sure) Go has channels
and fibres but allow me to show you the code I use in Felix:

proc dispatch_event(
  keyboard:schannel[SDL_keysym],
  active:schannel[SDL_ActiveEvent],
  resize:schannel[SDL_ResizeEvent]
)
{
  whilst true do
    var e : SDL_Event;
    poll_event(&e);
    match get_type e with
    | ?et when et == SDL_ACTIVEEVENT =>
      { write (active, e.active); }

    | ?et when et == SDL_VIDEORESIZE =>
      { write (resize, e.resize); }

    | ?et when et == SDL_KEYDOWN =>
      { write (keyboard, e.key.keysym); }

    | ?et when et == SDL_QUIT =>
      { Quit 0; }

    | _ => {}
    endmatch;
  done;
}

Notice that this code does not **seem** to be invoking a callback.
It calls a routine to get an event and dispatches the event down
one of three channels.

Here is the resizing code:

proc resizechan(x:schannel[SDL_ResizeEvent])
{
  whilst true do
    handle_resize$ read x;
  done;
}

The program units are modular, they all have a stack with
local variables (or so it seems ..). They have the look and
feel of threads.

But they're not. The execution model of the underlying code
is callbacks (actually C++ classes with a resume() method).
The compiler control-inverts the code, that is, it turns it inside
out: it takes code that "thinks" it is the master and turns it into
slave code.

It's a great pity some of the APIs SDL uses are forced to be callback
driven (Audio I believe). The really big advantage of SDL is that
it is a library NOT a framework that forces you to do everything with
callbacks.

BTW: to understand how important control inversion is, think about parsing
a file with a subroutine called with a single character at a time, as opposed to
reading the data. When you read, you're the master. When you're called with
the data you're a slave.

When you write C you're using the most important control-inversion
function on your computer: the operating system. Actual data from
your disk drive or network is delivered asynchronously and the OS
control inverts so the client application reads it under application
control: the application *thinks* it is the master.


--
john skaller
***@users.sourceforge.net
Kenneth Bull
2011-02-22 01:59:48 UTC
Permalink
Post by john skaller
The other is to use a better language :) I think (not sure) Go has channels
Go has goroutines, which are like Windows fibres in that they are
application controlled, rather than like threads, which are OS scheduler
controlled.
http://golang.org/doc/go_spec.html#Go_statements

A channel is like a typed pipe. It can be used to send and receive
data between goroutines.
http://golang.org/doc/go_spec.html#Channel_types

You can find more documentation for Go here:
http://golang.org/doc/docs.html
Jeff Post
2011-02-22 03:51:36 UTC
Permalink
Post by john skaller
That's quite a sane algorithm, although it has one problem: callbacks.
Hard to avoid in C.
Callbacks suck because you lose the stack: your per-widget code becomes
a slave of the event loop.
Not a problem in the applications I write. Don't know about games though since
I don't write games.
Post by john skaller
The other is to use a better language :) I think (not sure) Go has channels
Since I don't know Felix I can't comment on your code (to me, Felix is a cat
-- ouch! I guess I'm showing my age ;-) Thanks anyway for providing the
example.
Post by john skaller
Notice that this code does not **seem** to be invoking a callback.
The program units are modular, they all have a stack with
local variables (or so it seems ..). They have the look and
feel of threads.
But they're not. The execution model of the underlying code
is callbacks.
Uh, okay. Then I fail to see the difference.
Post by john skaller
It's a great pity some of the API's SDL uses are forced to be callback
driven (Audio I believe). The really big advantage of SDL is that
it is a library NOT a framework that forces you to do everything with
callbacks.
That's one of many things I like about SDL.
Post by john skaller
BTW: to understand how important control inversion is, think about parsing
a file with a subroutine called with a single character at a time, as
opposed to reading the data. When you read, you're the master. When you're
called with the data you're a slave.
Funny you should mention that. My latest application does read a file one
character at a time because it needs to parse files written on Linux (LF only
newline), Windows (CR/LF newline), and Mac (CR only newline). Whether it does
so as master or slave is not relevant to the application.
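
A minimal sketch of that kind of character-at-a-time reader, with hypothetical
names (not the poster's actual code):

#include <stdio.h>

/* Return the next character, folding LF, CR LF and bare CR into '\n'. */
static int next_char(FILE *fp)
{
    int c = fgetc(fp);
    if (c == '\r') {                      /* Mac (CR) or Windows (CR LF) */
        int peek = fgetc(fp);
        if (peek != '\n' && peek != EOF)
            ungetc(peek, fp);             /* bare CR: give the byte back */
        return '\n';
    }
    return c;                             /* LF, ordinary character, or EOF */
}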

Jeff
john skaller
2011-02-22 13:41:58 UTC
Permalink
Post by Jeff Post
Post by john skaller
That's quite a sane algorithm, although it has one problem: callbacks.
Hard to avoid in C.
Callbacks suck because you lose the stack: your per-widget code becomes
a slave of the event loop.
Not a problem in the applications I write. Don't know about games though since
I don't write games.
i don't mean to be offensive but ..

"I am fine with assembler I don't need high level languages"

"I think gotos are just fine, I don't need block structured programming languages"

"Procedural code is fine, I don't need functional programming"

"I am happy with object orientation".

"I am fine with callbacks"

It's all the same. You don't understand why what you're doing is bad, you're
used to it and you think it is ok.

It isn't OK.
Post by Jeff Post
Post by john skaller
The other is to use a better language :) I think (not sure) Go has channels
Since I don't know Felix I can't comment on your code (to me, Felix is a cat
-- ouch! I guess I'm showing my age ;-)
Felix is indeed a cat. -- ouch I guess I'm showing mine :)
[If Guido can name his language after a snake I can use a cat :]
Post by Jeff Post
Post by john skaller
But they're not. The execution model of the underlying code
is callbacks.
Uh, okay. Then I fail to see the difference.
You can't see the difference between C and machine code?

It's called automation: the compiler (in both cases) does a lot
of tedious housework for you and gets it right every time (hopefully :)
Post by Jeff Post
Post by john skaller
BTW: to understand how important control inversion is, think about parsing
a file with a subroutine called with a single character at a time, as
opposed to reading the data. When you read, you're the master. When you're
called with the data you're a slave.
Funny you should mention that. My latest application does read a file one
character at a time because it needs to parse files written on Linux (LF only
newline), Windows (CR/LF newline), and Mac (CR only newline). Whether it does
so as master or slave is not relevant to the application.
The issue isn't whether you read the file one character at a time, but whether you read
the character or are called with it.

The difference is very relevant to the complexity of the code you write.
For example analysing expressions requires recursion. To do recursion you must
have a stack.

You can either use the machine stack, or you can EMULATE a stack, but you can't
do it without a stack.

Emulating a stack is more work, and therefore more error prone, than using
one integrated into the programming language. But you can't use the machine
stack if your code is a slave because the only way to get the next character
is to return control. If your code is a master you can read the next character
anywhere in the code, even inside a deeply nested recursive context.
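
A minimal sketch of that "master" style, a recursive reader that pulls the
next character on demand from inside nested calls (hypothetical example):

#include <stdio.h>

static int max_depth = 0;

/* Called just after a '(' has been read; consumes input up to the
 * matching ')'.  The recursion lives on the ordinary machine stack. */
static void parse_group(FILE *fp, int depth)
{
    int c;
    if (depth > max_depth)
        max_depth = depth;
    while ((c = fgetc(fp)) != EOF && c != ')') {   /* read whenever we want */
        if (c == '(')
            parse_group(fp, depth + 1);
    }
}

int main(void)
{
    int c;
    while ((c = fgetc(stdin)) != EOF)
        if (c == '(')
            parse_group(stdin, 1);
    printf("max nesting depth: %d\n", max_depth);
    return 0;
}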

--
john skaller
***@users.sourceforge.net
Mason Wheeler
2011-02-22 14:02:04 UTC
Permalink
OK, this isn't really my thread, but I've been watching it, and the more you
say, the more confused I become. And this last post seems to have devolved
into pure gibberish.
Post by Jeff Post
Post by john skaller
That's quite a sane algorithm, although it has one problem: callbacks.
Hard to avoid in C.
Callbacks suck because you lose the stack: your per-widget code becomes
a slave of the event loop.
Not a problem in the applications I write. Don't know about games though since
I don't write games.
i don't mean to be offensive but ..
"I am fine with assembler I don't need high level languages"
"I think gotos are just fine, I don't need block structured programming languages"
"Procedural code is fine, I don't need functional programming"
"I am happy with object orientation".
"I am fine with callbacks"
It's all the same. You don't understand why what you're doing is bad, you're
used to it and you think it is ok.
It isn't OK.
So then, if I'm parsing your viewpoint correctly, "OOP = bad, functional
programming = good, callbacks = bad"?

...huh?

You do know that callbacks (AKA the use of higher-order functions) are one of
the most fundamental aspects of functional programming, don't you?
Post by Jeff Post
Post by john skaller
BTW: to understand how important control inversion is, think about parsing
a file with a subroutine called with a single character at a time, as
opposed to reading the data. When you read, you're the master. When you're
called with the data you're a slave.
Funny you should mention that. My latest application does read a file one
character at a time because it needs to parse files written on Linux (LF only
newline), Windows (CR/LF newline), and Mac (CR only newline). Whether it does
so as master or slave is not relevant to the application.
The issue isn't whether you read the file one character at a time, but whether you read
the character or are called with it.
The difference is very relevant to the complexity of the code you write.
For example analysing expressions requires recursion. To do recursion you must
have a stack.
You can either use the machine stack, or you can EMULATE a stack, but you can't
do it without a stack.
Emulating a stack is more work, and therefore more error prone, than using
one integrated into the programming language. But you can't use the machine
stack if your code is a slave because the only way to get the next character
is to return control. If your code is a master you can read the next character
anywhere in the code, even inside a deeply nested recursive context.
That depends entirely on the complexity of the grammar you're using. For a
lexer (tokenizer), being called one character at a time is perfectly
reasonable, if a bit slower than optimal, and no stack is required. For a
parser, yes, you're right, you should have a stream of tokens available that
you can ask for the next token when you need it, but it doesn't sound like
he's actually describing a recursive parser.
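
A minimal sketch of such a called-one-character-at-a-time tokenizer, keeping
its state in a struct instead of on the call stack (hypothetical names,
illustration only):

#include <ctype.h>
#include <stdio.h>

typedef struct {
    int  in_word;                          /* currently inside a token? */
    int  len;
    char buf[64];
} Lexer;

/* Feed one character; prints a token whenever a delimiter ends one. */
static void lexer_feed(Lexer *lx, int c)
{
    if (isalnum(c)) {
        if (lx->len < (int)sizeof lx->buf - 1)
            lx->buf[lx->len++] = (char)c;
        lx->in_word = 1;
    } else if (lx->in_word) {
        lx->buf[lx->len] = '\0';
        printf("token: %s\n", lx->buf);
        lx->len = 0;
        lx->in_word = 0;
    }
}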

There's nothing inherently bad about callbacks. For a lot of things, they
greatly simplify the work you need to do. For others, they're not appropriate
for the task, so you use some other technique instead.
john skaller
2011-02-22 14:50:15 UTC
Permalink
Post by Mason Wheeler
So then, if I'm parsing your viewpoint correctly, "OOP = bad, functional
programming = good, callbacks = bad"?
...huh?
You do know that callbacks (AKA the use of higher-order functions) are one of
the most fundamental aspects of functional programming, don't you?
Yes, and often it sucks. There is a difference or two though.. in most FPLs
callbacks have a context so it's a bit better: the callback is a closure, not
just a context-less function (as in C).
Post by Mason Wheeler
That depends entirely on the complexity of the grammar you're using. For a
lexer (tokenizer), being called one character at a time is perfectly
reasonable, if a bit slower than optimal, and no stack is required.
It's only reasonable if the lexer is *generated* by a tool, which builds
a finite state automaton, or at least an NFA.
Post by Mason Wheeler
For a parser, yes, you're right, you should have a stream of tokens available
that you can ask for the next token when you need it, but it doesn't sound
like he's actually describing a recursive parser.
No, of course not, I just gave that as an example.
Post by Mason Wheeler
There's nothing inherently bad about callbacks.
There's nothing inherently bad about assembler or using gotos either.
It's only bad if you use this kind of technology when there's something better.
Post by Mason Wheeler
For a lot of things, they greatly simplify the work you need to do. For
others, they're not appropriate for the task, so you use some other technique
instead.
I agree. Callbacks are only useful if the state machine they can conveniently
implement via a single client data object is simple.

The problem isn't callbacks (Felix has callbacks! and as you point out
HOF's often use callbacks). The problem is when you're forced to use
them by a framework and your problem is complex enough you demand
the tools of higher level systems: modularity, integrated data and control
flow using a stack, and if you go even higher level you need things like
garbage collection for memory management.

Obviously for simple jobs assembler is just fine.

Most (non-arcade) games and GUI applications are complex enough that
callbacks alone just won't do.

My point is not that assembler, gotos, procedural code, or OO are bad:
my point is that they're limited. And a second point needs to be made:
better technology like fibres can't be implemented without compiler
support. You can't do block structured programming in assembler
even though you can emulate the principles. You can't do OO in C,
even though you can emulate it. You can't do real functional
programming in C++. And you can't do coroutines
without language support either.

Game programmers are the worst hit by bad technology because games
are the most sophisticated and difficult application around. Which is why
most games are so deficient in many ways .. most "so called" strategy games
have hardly any strategy in them, their unit routing algorithms are non-existent
or suck totally -- even though good algorithms exist -- because the programmers
spend most of their time struggling to implement basic stuff without error,
because the tools they're using aren't up to the job.


--
john skaller
***@users.sourceforge.net
Mason Wheeler
2011-02-22 15:51:41 UTC
Permalink
Post by Mason Wheeler
So then, if I'm parsing your viewpoint correctly, "OOP = bad, functional
programming = good, callbacks = bad"?
...huh?
You do know that callbacks (AKA the use of higher-order functions) are one of
the most fundamental aspects of functional programming, don't you?
Yes, and often it sucks. There is a difference or two though.. in most FPLs
callbacks have a context so it's a bit better: the callback is a closure, not
just a context-less function (as in C).
Good point. I'm used to using Delphi, where callbacks are usually
implemented as method pointers, which are basically simple closures that
provide an object for context. SDL uses a similar principle for its callbacks,
putting a user-defined data pointer as part of the signature for a callback.
Post by Mason Wheeler
That depends entirely on the complexity of the grammar you're using. For a
lexer (tokenizer), being called one character at a time is perfectly
reasonable, if a bit slower than optimal, and no stack is required.
It's only reasonable if the lexer is *generated* by a tool, which builds
a finite state automaton, or at least an NFA.
OK, you lost me again. Parser generation I can understand, but a lexer is dead
simple to hand-roll. I wrote a lexer just a few weeks ago. It took a couple
hours to write and about 10 minutes to debug. Weighs in at a bit under 400
lines of code for the business logic, plus a keyword table. Child's play.
Post by Mason Wheeler
There's nothing inherently bad about callbacks.
There's nothing inherently bad about assembler or using gotos either.
It's only bad if you use this kind of technology when there's something better.
I assume you're still talking about C callbacks with no context here? Also, why
do you keep ragging on ASM? The structured programming theorems prove
that you can write any program without gotos, but there are some things that
simply can't be done without the use of inline assembly, and IMO any language
with no support for it is crippled.
The problem isn't callbacks (Felix has callbacks! and as you point out
HOF's often use callbacks). The problem is when you're forced to use
them by a framework and your problem is complex enough you demand
the tools of higher level systems: modularity, integrated data and control
flow using a stack, and if you go even higher level you need things like
garbage collection for memory management.
Any examples? I've never run across a scenario where a framework forces
you to use a callback in a situation that isn't appropriate for one and it ends
up getting in the way.

Also, WRT garbage collection, I've never encountered any programming
problem, no matter how "high-level," that required it. I consider garbage
collection one of the worst misfeatures of all time. It's only "necessary" in
functional languages because they're designed very poorly, based on
fundamental principles such as "let's pretend we're not *really* running
on a Turing machine." The problem with GC is that it eliminates the
perception of the need to think about memory management, without
eliminating the actual need to think about memory management, thus
eliminating quite a bit of *thinking* that is still necessary. (See
http://tinyurl.com/9ngt74 and http://tinyurl.com/4pxr822 )
Obviously for simple jobs assembler is just fine.
Most (non-arcade) games and GUI applications are complex enough that
callbacks alone just won't do.
I don't know. At work I work on the most complex GUI application I've ever
seen. It weighs in at around 3.5 million lines of Delphi code, and if you live
in the USA and watch TV or listen to the radio, chances are it's running your
station. And, being a GUI application, everything that happens gets kicked
off by an event handler, which is a callback with a method pointer.
my point is that they're limited.
And functional programming isn't? It's difficult to even look at a serious
functional language like Lisp or Haskell without saying "this entire thing is
one big abstraction inversion!"
And a second point needs to be made: better technology like fibres can't
be implemented without compiler support.
Sure it can. I just call the CreateFiber function and I'm good.
http://msdn.microsoft.com/en-us/library/ms682402%28v=vs.85%29.aspx
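
A minimal Windows-only sketch of that fiber API (ConvertThreadToFiber,
CreateFiber, SwitchToFiber), just to show the cooperative switching; error
handling omitted:

#include <windows.h>
#include <stdio.h>

static LPVOID main_fiber;                 /* the fiber the main thread becomes */

static VOID CALLBACK worker(LPVOID param)
{
    printf("worker: step 1 (%s)\n", (const char *)param);
    SwitchToFiber(main_fiber);            /* yield back to main */
    printf("worker: step 2\n");
    SwitchToFiber(main_fiber);            /* a fiber routine must not return */
}

int main(void)
{
    LPVOID w;
    main_fiber = ConvertThreadToFiber(NULL);
    w = CreateFiber(0, worker, (LPVOID)"hello");
    SwitchToFiber(w);                     /* runs worker until it yields */
    puts("main: between the two steps");
    SwitchToFiber(w);                     /* resumes worker where it left off */
    DeleteFiber(w);
    return 0;
}
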
You can't do block structured programming in assembler
even though you can emulate the principles. You can't do OO in C,
even though you can emulate it. You can't do real functional
programming in C++. And you can't do coroutines
without language support either.
You know what the difference between doing something the "real" way,
with language support, and emulating the principles is? When it's not
built into the language as a fundamental abstraction, you can debug
it, examine it, and find ways to improve it. When the language abstracts
all that away, you lose that ability.

That's part of the reason why I like Delphi. You can do extremely high-level
stuff in it, including functional programming with real, language-supported
closures, but you can also go as low as you need to, all the way down to
inline ASM if necessary.
Game programmers are the worst hit by bad technology because games
are the most sophisticated and difficult application around. Which is why
most games are so deficient in many ways .. most "so called" strategy games
have hardly any strategy in them, their unit routing algorithms are non-existent
or suck totally -- even though good algorithms exist -- because the programmers
spend most of their time struggling to implement basic stuff without error,
because the tools they're using aren't up to the job.
Most of that can be laid at the feet of C++. If game programmers were smart,
they'd use a language with a real object model and decent OO semantics, and
half their trouble would vanish.
Jeff Post
2011-02-22 16:31:22 UTC
Permalink
Post by Mason Wheeler
That depends entirely on the complexity of the grammar you're using. For a
lexer (tokenizer), being called one character at a time is perfectly
reasonable, if a bit slower than optimal, and no stack is required. For a
parser, yes, you're right, you should have a stream of tokens available that
you can ask for the next token when you need it, but it doesn't sound like
he's actually describing a recursive parser.
I'm not. I've written (simple) assemblers and compilers, so I know what
parsing is. In this application the "parsing" consists simply of separating a
text input line into specific fields, but it must do so in an OS-independent
way.

I really don't understand why John thinks master/slave is relevant here. The
user presses a key when he's darned well and ready. Of course the application
is driven by the data--it can't be otherwise. I think this conversation has
devolved into silliness.

Jeff
Jeff Post
2011-02-22 16:23:49 UTC
Permalink
Post by john skaller
i don't mean to be offensive but ..
"I am fine with assembler I don't need high level languages"
"I think gotos are just fine, I don't need block structured programming languages"
"Procedural code is fine, I don't need functional programming"
"I am happy with object orientation".
"I am fine with callbacks"
It's all the same. You don't understand why what you're doing is bad,
you're used to it and you think it is ok.
It isn't OK.
I don't think that's offensive; I think it's silly. You're making assumptions
that aren't valid.

Jeff
jon
2011-02-22 16:55:44 UTC
Permalink
Post by john skaller
Post by Jeff Post
The way I handle it is to define a general widget type (structure or class,
depending on language). Then each widget (text input, text output, file
selector, etc) attaches callbacks for the types of events they need to
handle. Events are processed by passing them to a processCallBack function
which runs through the widget list and passes the event to the callback
function for the topmost widget in the list that has registered a callback
function for that type of event. The widget that processes the event then
becomes the topmost widget.
That's quite a sane algorithm, although it has one problem: callbacks.
Hard to avoid in C.
Callbacks suck because you lose the stack: your per-widget code becomes
a slave of the event loop.
There are two ways around this. One is to use threads. This also sucks
because it is overkill (consumes resources).
The other is to use a better language :) I think (not sure) Go has channels
proc dispatch_event(
  keyboard:schannel[SDL_keysym],
  active:schannel[SDL_ActiveEvent],
  resize:schannel[SDL_ResizeEvent]
)
{
  whilst true do
    var e : SDL_Event;
    poll_event(&e);
    match get_type e with
    | ?et when et == SDL_ACTIVEEVENT =>
      { write (active, e.active); }
    | ?et when et == SDL_VIDEORESIZE =>
      { write (resize, e.resize); }
    | ?et when et == SDL_KEYDOWN =>
      { write (keyboard, e.key.keysym); }
    | ?et when et == SDL_QUIT =>
      { Quit 0; }
    | _ => {}
    endmatch;
  done;
}
Notice that this code does not **seem** to be invoking a callback.
It calls a routine to get an event and dispatches the event down
one of two channels.
proc resizechan(x:schannel[SDL_ResizeEvent])
{
  whilst true do
    handle_resize$ read x;
  done;
}
The program units are modular, they all have a stack with
local variables (or so it seems ..). They have the look and
feel of threads.
You might be interested in Tame, which I think accomplishes something
close to your goal.
http://pdos.csail.mit.edu/papers/tame-usenix07.pdf

Which is part of the webserver that runs okcupid.com:
http://okws.org/doku.php?id=okws
john skaller
2011-02-22 18:35:18 UTC
Permalink
Post by jon
Post by john skaller
The program units are modular, they all have a stack with
local variables (or so it seems ..). They have the look and
feel of threads.
You might be interested in Tame, which I think accomplishes something
close to your goal.
http://pdos.csail.mit.edu/papers/tame-usenix07.pdf
However it seems Tame is used to manage real concurrency?

--
john skaller
***@users.sourceforge.net
Rainer Deyke
2011-02-22 03:11:26 UTC
Permalink
Post by Jeff Post
Post by Rainer Deyke
Post by Rainer Deyke
I have a GUI program that listens for both key down and text input
events. The intended behavior is that each key is processed exactly
once, by exactly one GUI widget. However, when the user presses the
'1' key (or any other text input key), it can be processed twice: once
as a text input event by a text input widget and once as a shortcut key
by another widget. The text input widget cannot consume the key down
event because it can't match the key down event to the corresponding
text input event.
What is the recommended way to handle this? Is there a way to detect if
a particular key down event is used for text input so that I can
suppress these events while listening for text input?
As a temporary hack, I am setting the text input widget to consume all
key down events where the keysym is in unicode range (i.e. 30th bit not
set), unless either ctrl or alt is pressed. This seems to work.
Why is that a hack?
It's a hack because I have no way to tell if I'm inadvertently consuming
key down events that are not used for text input, or if I'm
inadvertently failing to consume key down events that /are/ used for
text input. My algorithm for detecting if a key is used for text input
is based on guesswork and intuition, not actual knowledge. In fact, I'm
pretty sure that my algorithm is incorrect in some corner cases. For
example, it fails to consume the key down events for modifier keys that
are used for text input.

Even if my algorithm were correct, it would belong in SDL, not in user code.
--
Rainer Deyke - ***@eldwood.com
Jeff Post
2011-02-22 04:02:16 UTC
Permalink
Post by Rainer Deyke
It's a hack because I have no way to tell if I'm inadvertently consuming
key down events that are not used for text input, or if I'm
inadvertently failing to consume key down events that /are/ used for
text input. My algorithm for detecting if a key is used for text input
is based on guesswork and intuition, not actual knowledge. In fact, I'm
pretty sure that my algorithm is incorrect in some corner cases. For
example, it fails to consume the key down events for modifier keys that
are used for text input.
Okay. Without seeing your code I have no way of knowing if my code would be of
use to you. Thought it would be nice to offer you my code though, in case you
might find it useful.
Post by Rainer Deyke
Even if my algorithm were correct, it would belong in SDL, not in user code.
You might be right, but the biggest advantage I see in SDL is that it doesn't
force any particular methodology on developers. It merely provides a platform
independent way to access low-level functions.

My offer still stands. Use what might be of use to you and disregard the rest.

Jeff
Rainer Deyke
2011-02-22 06:56:40 UTC
Permalink
Post by Jeff Post
Post by Rainer Deyke
It's a hack because I have no way to tell if I'm inadvertently consuming
key down events that are not used for text input, or if I'm
inadvertently failing to consume key down events that /are/ used for
text input. My algorithm for detecting if a key is used for text input
is based on guesswork and intuition, not actual knowledge. In fact, I'm
pretty sure that my algorithm is incorrect in some corner cases. For
example, it fails to consume the key down events for modifier keys that
are used for text input.
Okay. Without seeing your code I have no way of knowing if my code would be of
use to you. Thought it would be nice to offer you my code though, in case you
might find it useful.
Here is the code I'm talking about:

if (!(key & 0x20000000) && !(mod & (KMOD_CTRL | KMOD_ALT))) {
  return true;
}
Post by Jeff Post
Post by Rainer Deyke
Even if my algorithm were correct, it would belong in SDL, not in user code.
You might be right, but the biggest advantage I see in SDL is that it doesn't
force any particular methodology on developers. It merely provides a platform
independent way to access low-level functions.
Detecting if a given keyboard input event is linked to text input is
basic, low-level stuff. There is no clean way to implement it on
top of the API SDL provides.
--
Rainer Deyke - ***@eldwood.com
Jonathan Dearborn
2011-02-22 13:42:39 UTC
Permalink
I'm not sure if I totally get your problem, but I'll tell you my
approach to shortcuts.

The gui system keeps a list of the registered shortcuts. When a key
is pressed, the event is checked against these shortcuts. If it
matches one, then the event stops there and the shortcut is activated.
Otherwise it goes on to be processed by the widget which currently
has keyboard focus.
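
A minimal C sketch of that scheme, with the shortcut table and the
focused-widget hand-off left as hypothetical stand-ins for the application's
own code:

#include "SDL.h"

typedef struct {
    int    sym;                 /* SDLK_* value of the shortcut key */
    Uint16 mods;                /* required modifiers, e.g. KMOD_CTRL */
    void (*action)(void);
} Shortcut;

/* Returns 1 if the key down matched a registered shortcut and was consumed;
 * otherwise the caller passes the event on to the widget with keyboard focus. */
static int try_shortcuts(const Shortcut *table, int count, const SDL_Event *ev)
{
    int i;
    for (i = 0; i < count; ++i) {
        if (ev->key.keysym.sym == table[i].sym &&
            (ev->key.keysym.mod & table[i].mods) == table[i].mods) {
            table[i].action();
            return 1;
        }
    }
    return 0;
}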

Jonny D
Post by Jeff Post
Post by Rainer Deyke
It's a hack because I have no way to tell if I'm inadvertently consuming
key down events that are not used for text input, or if I'm
inadvertently failing to consume key down events that /are/ used for
text input.  My algorithm for detecting if a key is used for text input
is based on guesswork and intuition, not actual knowledge.  In fact, I'm
pretty sure that my algorithm is incorrect in some corner cases.  For
example, it fails to consume the key down events for modifier keys that
are used for text input.
Okay. Without seeing your code I have no way of knowing if my code would be of
use to you. Thought it would be nice to offer you my code though, in case you
might find it useful.
       if (!(key & 0x20000000) && !(mod & (KMOD_CTRL | KMOD_ALT))) {
         return true;
       }
Post by Jeff Post
Post by Rainer Deyke
Even if my algorithm were correct, it would belong in SDL, not in user code.
You might be right, but the biggest advantage I see in SDL is that it doesn't
force any particular methodology on developers. It merely provides a platform
independent way to access low-level functions.
Detecting if a given keyboard input event is linked to text input is
basic, low-level stuff.  There is no clean way to implement it on
top of the API SDL provides.
Rainer Deyke
2011-02-22 20:23:47 UTC
Permalink
Post by Jonathan Dearborn
I'm not sure if I totally get your problem, but I'll tell you my
approach to shortcuts.
My problem is simple. When I press the 'a' key I get two events from
SDL: a key down event (SDLK_a) and a text input event ("a"). Widget A
listens for text input events; widget B listens for key down events.
Widget A has priority over widget B. I need to get widget A to consume
all key down events that are used for text input to prevent the same key
from being processed twice, once as a key down event and once as a text
input event. The problem is accurately identifying the key down events
that are used for text input.

My current strategy is for widget A to consume all key down events where
the keysym is in the unicode range and neither ctrl nor alt is pressed.
This is a heuristic that /seems/ to work, but is probably inaccurate.

What I want is a simple and accurate way to identify key down events that
are used for text input.

Actually dispatching the events to the widgets is trivial. The only
difficulty is identifying which events should be dispatched and which
should be consumed.
--
Rainer Deyke - ***@eldwood.com
Jjgod Jiang
2011-02-22 16:14:03 UTC
Permalink
Post by Rainer Deyke
Post by Jeff Post
You might be right, but the biggest advantage I see in SDL is that it doesn't
force any particular methodology on developers. It merely provides a platform
independent way to access low-level functions.
Detecting if a given keyboard input event is linked to text input is
basic, low-level stuff.  There is no clean way to implement it on
top of the API SDL provides.
Since I am the one who implemented text input events for SDL/Cocoa, here is
my comment about this issue:

I definitely thought about this when I designed the API. However, SDL
on Mac works by intercepting key events at the lowest level (the app event
dispatch queue) and sending them to the application directly, while text
input events come from a somewhat higher level, provided by the NSView
NSTextInput protocol. So basically, when we intercept all the key events, we
don't know which of them are going to trigger text input events, so we
can't selectively send or drop any one of them.

My suggestion is that you should disable text input events when you don't
need them (as in a typical GUI toolkit, which only accepts text input
when the text cursor is focused in a certain text box), and when you do
need them, ignore most of the key events except the ones you can tell are
not going to produce text input (function keys, ctrl/option key combinations,
etc.)
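
A minimal sketch of that suggestion, assuming the SDL 1.3
SDL_StartTextInput()/SDL_StopTextInput() calls are available; the widget hooks
are hypothetical stand-ins for the application's own code:

#include "SDL.h"

/* Hypothetical application hooks, not part of SDL. */
extern int  text_box_focused(void);
extern void text_box_insert(const char *utf8);
extern void run_shortcut(const SDL_KeyboardEvent *key);

/* Call when focus moves into or out of a text box. */
static void set_text_focus(int focused)
{
    if (focused)
        SDL_StartTextInput();     /* SDL_TEXTINPUT events start arriving */
    else
        SDL_StopTextInput();      /* back to plain key events only */
}

/* Route one event along the lines suggested above. */
static void route_event(const SDL_Event *ev)
{
    switch (ev->type) {
    case SDL_TEXTINPUT:
        text_box_insert(ev->text.text);
        break;
    case SDL_KEYDOWN:
        if (text_box_focused() &&
            !(ev->key.keysym.mod & (KMOD_CTRL | KMOD_ALT)))
            break;                /* likely to show up again as SDL_TEXTINPUT */
        run_shortcut(&ev->key);
        break;
    default:
        break;
    }
}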

- Jiang
Rainer Deyke
2011-02-22 20:40:42 UTC
Permalink
Post by Jjgod Jiang
I have definitely thought about this when I designed the API, however, SDL
on Mac works by intercepting key events in the lowest level (app event
dispatch queue) and send it to the application directly, while text
input events are somehow at a higher level, provided by NSView NSTextInput
protocol. So basically when we intercepting all the key events, we don't
know which one of them is going to trigger text input events, so we
can't selectively send or drop any one of them.
I see. That makes things more difficult. However, if I'm going to rely
on guesswork to identify key down events that are used for text input
/anyway/, then I still think it makes more sense to put this guesswork
in SDL than in user code. That way the same functionality can be used
and tested by many people instead of just one. The change could be as
simple as adding an additional field to SDL_KeyboardEvent:
Uint8 text; // Non-zero if SDL thinks that this
// keyboard event may be used for text input.
--
Rainer Deyke - ***@eldwood.com
Sam Lantinga
2011-02-23 04:10:49 UTC
Permalink
This is actually a decent assumption. The keycodes which generate
printable characters have a printable value by design.

As Jiang mentioned, it's often not possible to tell in advance whether
keyboard input is going to be used to compose text input.
Post by Rainer Deyke
Post by Rainer Deyke
I have a GUI program that listens for both key down and text input
events.  The intended behavior is that each key is processed exactly
once, by exactly one GUI widget.  However, when the user presses the
'1' key (or any other text input key), it can be processed twice: once
as a text input event by a text input widget and once as a shortcut key
by another widget.  The text input widget cannot consume the key down
event because it can't match the key down event to the corresponding
text input event.
What is the recommended way to handle this?  Is there a way to detect if
a particular key down event is used for text input so that I can
suppress these events while listening for text input?
As a temporary hack, I am setting the text input widget to consume all
key down events where the keysym is in unicode range (i.e. 30th bit not
set), unless either ctrl or alt is pressed.  This seems to work.
--
    -Sam Lantinga, Founder and CEO, Galaxy Gameworks
Jared Maddox
2011-02-23 07:27:40 UTC
Permalink
Sorry about the huge message, I hadn't checked email in a few days.

Date: Tue, 22 Feb 2011 12:37:18 +1100
From: john skaller <***@users.sourceforge.net>
To: SDL Development List <***@lists.libsdl.org>
Subject: Re: [SDL] text input and key down events
Post by john skaller
Post by Jeff Post
The way I handle it is to define a general widget type (structure or class,
depending on language). Then each widget (text input, text output, file
selector, etc) attaches callbacks for the types of events they need to
handle. Events are processed by passing them to a processCallBack function
which runs through the widget list and passes the event to the callback
function for the topmost widget in the list that has registered a callback
function for that type of event. The widget that processes the event then
becomes the topmost widget.
That's quite a sane algorithm, although it has one problem: callbacks.
Hard to avoid in C.
Callbacks suck because you lose the stack: your per-widget code becomes
a slave of the event loop.
There are two ways around this. One is to use threads. This also sucks
because it is overkill (consumes resources).
The other is to use a better language :) I think (not sure) Go has channels
You don't sound like you use C very much. There's one truly good way
to give C callbacks closure: a void pointer. Example function
prototype:
int accumulate( void *closure, int value );
If you need to dynamically resize the closure, and you don't want to
have another pointer inside of the closure, then you'd do this:
int accumulate( void **closure, int value );
I won't claim to be a big fan of C's pointer syntax (though the syntax
itself is my only real problem with pointers), but it works, it
reflects the way things really work (which is good for a 'portable
assembly language' like C), and it's a simple way to enable
reentrancy.
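
Filling that prototype out into a small self-contained example (the names are
invented for illustration):

#include <stdio.h>

typedef struct { int total; } Accum;

static int accumulate(void *closure, int value)
{
    Accum *a = closure;               /* recover the state packed into the void* */
    a->total += value;
    return a->total;
}

/* A generic driver that knows nothing about Accum: it just hands the
 * closure pointer back to whatever callback it was given. */
static void for_each(const int *xs, int n,
                     int (*cb)(void *closure, int value), void *closure)
{
    int i;
    for (i = 0; i < n; ++i)
        cb(closure, xs[i]);
}

int main(void)
{
    int data[] = { 1, 2, 3, 4 };
    Accum a = { 0 };
    for_each(data, 4, accumulate, &a);
    printf("sum = %d\n", a.total);    /* prints: sum = 10 */
    return 0;
}
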
Post by john skaller
It's a great pity some of the API's SDL uses are forced to be callback
driven (Audio I believe). The really big advantage of SDL is that
it is a library NOT a framework that forces you to do everything with
callbacks.
The relevant structure is this:
typedef struct SDL_AudioSpec
{
    int freq;                  /**< DSP frequency -- samples per second */
    SDL_AudioFormat format;    /**< Audio data format */
    Uint8 channels;            /**< Number of channels: 1 mono, 2 stereo */
    Uint8 silence;             /**< Audio buffer silence value (calculated) */
    Uint16 samples;            /**< Audio buffer size in samples (power of 2) */
    Uint16 padding;            /**< Necessary for some compile environments */
    Uint32 size;               /**< Audio buffer size in bytes (calculated) */

    SDL_AudioCallback callback;
    void *userdata;
} SDL_AudioSpec;
It's in include/SDL_audio.h. I've separated out the two relevant data
members: callback and userdata. You get to supply both, and userdata
is used as one of the arguments when callback is called. You then cast
it in the function to whatever type you want. For example, if you were
writing in C++, you would probably cast it to an object, and then call
a member function of that object. A binding for another language (for
example, a functional language) would likely do the same.
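
A minimal sketch of how those two members get used together; the "waveform" is
deliberately trivial, since this only illustrates the callback/userdata pairing:

#include <string.h>
#include "SDL.h"

typedef struct { Uint8 level; } ToneState;

/* SDL calls this from its audio thread, passing our userdata pointer back. */
static void fill_audio(void *userdata, Uint8 *stream, int len)
{
    ToneState *st = (ToneState *)userdata;     /* recover our own state */
    memset(stream, st->level, (size_t)len);    /* fill with a constant level */
}

int main(int argc, char *argv[])
{
    static ToneState state = { 16 };
    SDL_AudioSpec want;

    (void)argc; (void)argv;
    SDL_Init(SDL_INIT_AUDIO);
    memset(&want, 0, sizeof want);
    want.freq     = 22050;
    want.format   = AUDIO_U8;
    want.channels = 1;
    want.samples  = 512;
    want.callback = fill_audio;     /* our function ...         */
    want.userdata = &state;         /* ... and our state for it */

    if (SDL_OpenAudio(&want, NULL) == 0) {
        SDL_PauseAudio(0);          /* start calling the callback */
        SDL_Delay(1000);
        SDL_CloseAudio();
    }
    SDL_Quit();
    return 0;
}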

------------------------------

Date: Wed, 23 Feb 2011 00:41:58 +1100
From: john skaller <***@users.sourceforge.net>
To: SDL Development List <***@lists.libsdl.org>
Subject: Re: [SDL] text input and key down events
Post by john skaller
Post by Jeff Post
Post by john skaller
That's quite a sane algorithm, although it has one problem: callbacks.
Hard to avoid in C.
Callbacks suck because you lose the stack: your per-widget code becomes
a slave of the event loop.
Not a problem in the applications I write. Don't know about games though since
I don't write games.
i don't mean to be offensive but ..
"I am fine with assembler I don't need high level languages"
"I think gotos are just fine, I don't need block structured programming languages"
"Procedural code is fine, I don't need functional programming"
"I am happy with object orientation".
This one I do actually object to, with a caveat: A shift from a
language that supports operator-overloading for objects to ANYTHING
that doesn't is a genuine downgrade (No, I'm not a fan of Java, though
I do UNDERSTAND why they didn't include this particular feature).
Post by john skaller
"I am fine with callbacks"
It's all the same. You don't understand why what you're doing is bad, you're
used to it and you think it is ok.
It isn't OK.
Actually, it is okay. You need to remember: C is the most portable
assembly language known to man. Certain dialects of Forth might rival
it, but I'm not certain that even those do, and Forth is barely
structured (in fact, does it actually count as structured?).
Post by john skaller
Post by Jeff Post
Post by john skaller
But they're not. The execution model of the underlying code
is callbacks.
Uh, okay. Then I fail to see the difference.
You can't see the difference between C and machine code?
It's called automation: the compiler (in both cases) does a lot
of tedious housework for you and gets it right every time (hopefully :)
This is what language bindings exist for: to abstract concepts that
are appropriate for the language that the bound code interfaces
through, but either not appropriate or needlessly inconvenient in
another language that uses said interface.
Post by john skaller
Post by Jeff Post
Post by john skaller
BTW: to understand how important control inversion is, think about parsing
a file with a subroutine called with a single character at a time, as
opposed to reading the data. When you read, you're the master. When you're
called with the data you're a slave.
Funny you should mention that. My latest application does read a file one
character at a time because it needs to parse files written on Linux (LF only
newline), Windows (CR/LF newline), and Mac (CR only newline). Whether it does
so as master or slave is not relevant to the application.
The issue isn't whether you read the file one character at a time, but whether you read
the character or are called with it.
The difference is very relevant to the complexity of the code you write.
For example analysing expressions requires recursion. To do recursion you must
have a stack.
Correct. Personally, I've found that the 'slave' model (specifically,
callback with void pointer) is very useful, because I'm parsing
character-by-character anyways. However, in other situations I've
preferred the 'master' model. It depends on the situation, and
preferring one or the other will just cause you grief.

------------------------------

Date: Wed, 23 Feb 2011 01:50:15 +1100
From: john skaller <***@users.sourceforge.net>
To: SDL Development List <***@lists.libsdl.org>
Subject: Re: [SDL] text input and key down events
Post by john skaller
Post by Mason Wheeler
That depends entirely on the complexity of the grammar you're using. For a
lexer (tokenizer), being called one character at a time is perfectly
reasonable, if a bit slower than optimal, and no stack is required.
It's only reasonable if the lexer is *generated* by a tool, which builds
a finite state automaton, or at least an NFA.
Actually, I'm writing a C-like parser/lexer at the moment, and with
better string functions the only really annoying thing would be the
initial comment/string parsing. Everything else is decently simple so
far, you just need to know when to do what step (note: I'm not trying
to parse C itself, it has some quirks that I disagree with, some of
which complicate parsing ;) ).
Post by john skaller
The problem isn't callbacks (Felix has callbacks! and as you point out
HOF's often use callbacks). The problem is when you're forced to use
them by a framework and your problem is complex enough you demand
the tools of higher level systems: modularity, integrated data and control
flow using a stack,
That's what bindings in other languages are for. SDL is supposed to be
portable, and thus a highly portable language is the appropriate one
to have the interface in. Bindings for other languages can then be
written to provide a behaviorally equivalent interface in the targeted
language.
Post by john skaller
Most (non-arcade) games and GUI applications are complex enough that
callbacks alone just won't do.
Which is why the professional game developers mostly moved to languages
other than C some time ago. C++ (an obese, soul-sucking monstrosity,
sure, but fast, and, when using a small enough subset of its features,
understandable), Lua, Objective-C, etc. all offer feature improvements
over C, and thus have supplanted its use in various areas.
Post by john skaller
better technology like fibres can't be implemented without compiler
support.
Hahahahaha! Seriously, it's a hideous hack that might (AND SHOULD!)
cause your hair to fall out, BUT an extension of the C language's
setjmp/longjmp facility is sufficient to implement continuations, and
therefore sufficient for both fibers that don't use globals (and
technically those too, with entrance/exit functions) and coroutines.
You have some reasonable ideas, but this bit is totally off-base.

------------------------------

Date: Tue, 22 Feb 2011 07:51:41 -0800 (PST)
From: Mason Wheeler <***@yahoo.com>
To: SDL Development List <***@lists.libsdl.org>
Subject: Re: [SDL] text input and key down events
Post by Mason Wheeler
Post by john skaller
The problem isn't callbacks (Felix has callbacks! and as you point out
HOF's often use callbacks). The problem is when you're forced to use
them by a framework and your problem is complex enough you demand
the tools of higher level systems: modularity, integrated data and control
flow using a stack, and if you go even higher level you need things like
garbage collection for memory management.
<snip>
Post by Mason Wheeler
Also, WRT garbage collection, I've never encountered any programming
problem, no matter how "high-level," that required it. I consider garbage
collection one of the worst misfeatures of all time.
It has enabled the success of many Java programmers who otherwise
would never make it in the marketplace, but I'm not certain whether
that's a mark in the 'for' column or one in the 'against' column.
Post by Mason Wheeler
It's only "necessary" in
functional languages because they're designed very poorly, based on
fundamental principles such as "let's pretend we're not *really* running
on a Turing machine." The problem with GC is that it eliminates the
perception of the need to think about memory management, without
eliminating the actual need to think about memory management, thus
eliminating quite a bit of *thinking* that is still necessary. (See
http://tinyurl.com/9ngt74 and http://tinyurl.com/4pxr822 )
GC (or rather, automatic memory management, since I consider GC to not
encompass the entirety of AMM) is fairly intrinsic to declarative
languages, since you aren't supposed to ACTUALLY know how your code
works, but you're right in that attention needs to be paid to memory
during coding, since there's no guarantee that the compiler will go
with a memory-lean alternative to your memory-hungry algorithm (or
even recognize the algorithm as you wrote it!).
Post by Mason Wheeler
Post by john skaller
And a second point needs to be made: better technology like fibres can't
be implemented without compiler support.
Sure it can. I just call the CreateFiber function and I'm good.
http://msdn.microsoft.com/en-us/library/ms682402%28v=vs.85%29.aspx
Amen.
Post by Mason Wheeler
Post by john skaller
Game programmers are the worst hit by bad technology because games
are the most sophisticated and difficult application around. Which is why
most games are so deficient in many ways .. most "so called" strategy games
have hardly any strategy in them, their unit routing algorithms are
non-existent or suck totally -- even though good algorithms exist --
because the programmers spend most of their time struggling to implement
basic stuff without error, because the tools they're using aren't up to
the job.
Most of that can be laid at the feet of C++. If game programmers were smart,
they'd use a language with a real object model and decent OO semantics, and
half their trouble would vanish.
It's not even C++ really; MOST of the trouble is from (at least
initial) bad implementations (e.g. VC++6) and poor libraries.
Seriously, auto_ptr was the best they could agree on? Stroustrup even mentions
graphics at one point in TC++PL 2nd ed, which has never been part of
the standard.

As for the object model, what do you want, prototypes? Maybe quajects?
While C++ classes may have a few bugs in their design (I would have
preferred that classes NOT just be structs that default to private),
classes in general are appropriate for close-to-the-metal languages
like C++, dynamically extended prototype objects like in JavaScript &
co. aren't.
Mason Wheeler
2011-02-23 14:14:49 UTC
Permalink
Post by Mason Wheeler
Also, WRT garbage collection, I've never encountered any programming
problem, no matter how "high-level," that required it. I consider garbage
collection one of the worst misfeatures of all time.
It has enabled the success of many Java programmers who otherwise
would never make it in the marketplace, but I'm not certain whether
that's a mark in the 'for' column or one in the 'against' column.
Well, I'd put "facilitates the writing of bad code by incompetent
programmers" squarely in the "against" column.
Post by Mason Wheeler
Most of that can be laid at the feet of C++. If game programmers were smart,
they'd use a language with a real object model and decent OO semantics, and
half their trouble would vanish.
As for the object model, what do you want, prototypes? Maybe quajects?
While C++ classes may have a few bugs in their design (I would have
preferred that classes NOT just be structs that default to private),
classes in general are appropriate for close-to-the-metal languages
like C++, dynamically extended prototype objects like in JavaScript &
co. aren't.
First and most obvious, the whole point of object-oriented programming is
inheritance and polymorphism. These concepts do not mix with the use of
objects as value types. For example, what's the output of this simple C++
program?


#include <iostream>


class Parent
{
public:
    int a;
    int b;
    int c;

    Parent(int ia, int ib, int ic) {
        a = ia; b = ib; c = ic;
    };

    virtual void doSomething(void) {
        std::cout << "Parent doSomething" << std::endl;
    }
};

class Child : public Parent {
public:
    int d;
    int e;

    Child(int id, int ie) : Parent(1,2,3) {
        d = id; e = ie;
    };
    virtual void doSomething(void) {
        std::cout << "Child doSomething : D = " << d << std::endl;
    }
};

// Pass-by-value slices the Child argument down to a Parent, so this
// prints "Parent doSomething" even though main passes a Child.
void foo(Parent a) {
    a.doSomething();
}

int main(void)
{
    Child c(4, 5);
    foo(c);
    return 0;
}

Second, with no base object class, there's no way to create a function that
takes an object of any type as a parameter, which is incredibly limiting.

I could go on, but you get the general idea. C++'s object model is
completely broken on several fundamental levels.
Jonathan Dearborn
2011-02-23 16:35:25 UTC
Permalink
What can you do with a plain Object other than store it? C++ uses
generics for this, not polymorphism. How does Delphi do objects
differently?

Jonny D
Post by Mason Wheeler
Post by Mason Wheeler
Also, WRT garbage collection, I've never encountered any programming
problem, no matter how "high-level," that required it.  I consider garbage
collection one of the worst misfeatures of all time.
It has enabled the success of many Java programmers who otherwise
would never make it in the marketplace, but I'm not certain whether
that's a mark in the 'for' column or one in the 'against' column.
Well, I'd put "facilitates the writing of bad code by incompetent
programmers" squarely in the "against" column.
Post by Mason Wheeler
Most of that can be laid at the feet of C++.  If game programmers were smart,
they'd use a language with a real object model and decent OO semantics, and
half their trouble would vanish.
As for the object model, what do you want, prototypes? Maybe quajects?
While C++ classes may have a few bugs in their design (I would have
preferred that classes NOT just be structs that default to private),
classes in general are appropriate for close-to-the-metal languages
like C++, dynamically extended prototype objects like in JavaScript &
co. aren't.
First and most obvious, the whole point of object-oriented programming is
inheritance and polymorphism. These concepts do not mix with the use of
objects as value types.  For example, what's the output of this simple C++
program?
#include <iostream>
class Parent
{
  int a;
  int b;
  int c;
  Parent(int ia, int ib, int ic) {
     a = ia; b = ib; c = ic;
  };
  virtual void doSomething(void) {
     std::cout << "Parent doSomething" << std::endl;
  }
};
class Child : public Parent {
  int d;
  int e;
  Child(int id, int ie) : Parent(1,2,3) {
     d = id; e = ie;
  };
  virtual void doSomething(void) {
     std::cout << "Child doSomething : D = " << d << std::endl;
  }
};
void foo(Parent a) {
  a.doSomething();
}
int main(void)
{
  Child c(4, 5);
  foo(c);
  return 0;
}
Second, with no base object class, there's no way to create a function that
takes an object of any type as a parameter, which is incredibly limiting.
I could go on, but you get the general idea.  C++'s object model is
completely broken on several fundamental levels.
_______________________________________________
SDL mailing list
http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org
Jonathan Dearborn
2011-02-23 16:35:56 UTC
Permalink
Oh, and this should probably move to a new thread... Probably on the
gameprogrammer list...

Jonny D
Sam Lantinga
2011-02-23 19:09:30 UTC
Permalink
Yes please :)
--
    -Sam Lantinga, Founder and CEO, Galaxy Gameworks
Jared Maddox
2011-02-23 17:35:21 UTC
Permalink
First and most obvious, the whole point of object-oriented programming is
inheritance and polymorphism. These concepts do not mix with the use of
objects as value types.
I somewhat disagree with this. If I were designing it, I wouldn't
include inheritance in a modern OO language: interfaces, coercion and
composition provide the same advantages as inheritance, with better
behavior in various cases.
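
For concreteness, a minimal sketch of what "interfaces plus composition"
can look like in C++ terms; Drawable, SpriteRenderer and Player are
illustrative names, not anything from the thread:

#include <iostream>

// The "interface": pure virtual functions, no state, no implementation
// to inherit.
class Drawable {
public:
    virtual ~Drawable() {}
    virtual void draw() const = 0;
};

// Reusable behaviour lives in a separate component...
class SpriteRenderer {
public:
    void render(const char *name) const {
        std::cout << "drawing " << name << std::endl;
    }
};

// ...and the concrete type composes the component rather than deriving
// from a concrete base class.
class Player : public Drawable {
    SpriteRenderer renderer;
public:
    virtual void draw() const { renderer.render("player"); }
};

int main()
{
    Player p;
    const Drawable &d = p;   // used through the interface, by reference
    d.draw();                // prints "drawing player"
    return 0;
}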
 For example, what's the output of this simple C++
program?
<snip>
The parent version gets called. In order for the child version to be
called in that code, you'd need quajects, prototypes, or references,
one of which C++ actually has. Seriously, it's a horribly obese
language (Turing-complete twice? By accident? Seriously?), but for the
time when it was first designed, its object support is pretty good.
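
By way of illustration, a minimal sketch of the "references" route; the
same shape as the quoted program, trimmed to the essentials:

#include <iostream>

// Same shape as the quoted program, but foo takes its argument by
// reference, so no copy is made and virtual dispatch is preserved.
class Parent {
public:
    virtual void doSomething() { std::cout << "Parent doSomething" << std::endl; }
};

class Child : public Parent {
public:
    virtual void doSomething() { std::cout << "Child doSomething" << std::endl; }
};

void foo(Parent &a) {      // by reference: no slicing
    a.doSomething();
}

int main()
{
    Child c;
    foo(c);                // prints "Child doSomething"
    return 0;
}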
Second, with no base object class, there's no way to create a function that
takes an object of any type as a parameter, which is incredibly limiting.
I could go on, but you get the general idea.  C++'s object model is
completely broken on several fundamental levels.
You're making mountains out of molehills. C++ template functions make
it easy to wrap an identifier object around any type. Example:

#include <stdint.h>  /* for intptr_t; not required by the current C++
                        standard, but easy to provide, often present
                        anyway, and expected in the next standard. */

class GenericIdentifier
{
public:
    template< class T >
    static intptr_t defineType()
    {
        /* One static object per instantiation of defineType<T>, so its
           address serves as a unique identifier for the type T. */
        static T *id;
        return reinterpret_cast< intptr_t >( &id );
    }

    virtual intptr_t identifyThisType()
    {
        return defineType< void >();
    }
};

template< class T >
class Identifier : public GenericIdentifier
{
public:
    T &data;

    explicit Identifier( T &d ) : data( d ) { }

    virtual intptr_t identifyThisType()
    {
        return defineType< T >();
    }
};

You then just define two versions of your function: A template version
that wraps arguments in an Identifier instance, and a version that
takes Identifier instances. This is an easy thing to deal with.
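
A short usage sketch of that two-version pattern, assuming the
GenericIdentifier/Identifier definitions above are in scope; process is
an illustrative name, not anything from the thread:

#include <iostream>

// Assumes the GenericIdentifier / Identifier<T> classes defined above.
void process(GenericIdentifier &obj)   // version taking Identifier instances
{
    std::cout << "type id: " << obj.identifyThisType() << std::endl;
}

template< class T >
void process(T &value)                 // template version that wraps its argument
{
    Identifier< T > wrapped( value );
    // Cast to the base reference so the non-template overload is chosen
    // instead of recursing back into this template.
    process( static_cast< GenericIdentifier & >( wrapped ) );
}

int main()
{
    int i = 42;
    double d = 3.14;
    process(i);   // distinct ids: defineType<int>() and defineType<double>()
    process(d);   // return the addresses of different static objects
    return 0;
}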
What can you do with a plain Object other than store it?  C++ uses
generics for this, not polymorphism.  How does Delphi do objects
differently?
From looking it up, Delphi's primary type of object (which uses the
'class' keyword) is always dynamically allocated but doesn't use
pointer syntax, much as in Java. However, it isn't garbage collected,
so you wind up with inconsistent code appearances. Fortunately, Delphi
classes do seem to include destructors (though I didn't read much on
them last night), so that's a definite improvement over Java.
Mason Wheeler
2011-02-23 18:18:08 UTC
Permalink
Post by Mason Wheeler
First and most obvious, the whole point of object-oriented programming is
inheritance and polymorphism. These concepts do not mix with the use of
objects as value types.
I somewhat disagree with this, if I was designing it, I wouldn't
include inheritance in a modern OO language. Interfaces, coercion and
composition provide the same advantages as inheritance, with better
behavior in various cases.
Well, I've heard "inheritance is bad" repeated as an article of faith
by a handful of coders in the last few months, especially since Go
started gaining popularity, but I've never heard any actual
justification for it. And your proposed alternatives are all a lot
more work to implement than simple inheritance, so what are the
benefits that make it worth the extra work?
Post by Mason Wheeler
For example, what's the output of this simple C++
program?
<snip>
The parent version gets called.
Yep. The parent version gets called, after a hidden call to an
auto-generated copy constructor. In every other OO language I'm
familiar with, all objects are reference types and all value types
cannot be extended through inheritance, and there's a good reason
for that.
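
To see the hidden copy happen, one can write out the otherwise
auto-generated copy constructor by hand; a minimal sketch, same shape
as the quoted program:

#include <iostream>

class Parent {
public:
    Parent() {}
    Parent(const Parent &) {   // normally auto-generated; written out to log the copy
        std::cout << "Parent copy ctor (slicing the argument)" << std::endl;
    }
    virtual void doSomething() { std::cout << "Parent doSomething" << std::endl; }
};

class Child : public Parent {
public:
    virtual void doSomething() { std::cout << "Child doSomething" << std::endl; }
};

void foo(Parent a) { a.doSomething(); }   // by value: the copy happens here

int main()
{
    Child c;
    foo(c);   // prints the copy-constructor line, then "Parent doSomething"
    return 0;
}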
Post by Mason Wheeler
Second, with no base object class, there's no way to create a function that
takes an object of any type as a parameter, which is incredibly limiting.
I could go on, but you get the general idea. C++'s object model is
completely broken on several fundamental levels.
You're making mountains out of mole hills. C++ template functions make
You then just define two versions of your function: A template version
that wraps arguments in an Identifier instance, and a version that
takes Identifier instances. This is an easy thing to deal with.
I'd hardly call duplicating all your functions and building wrapper objects
for everything "an easy thing to deal with." More like, "a big, bloated
hack to work around a missing language feature."
Post by Mason Wheeler
What can you do with a plain Object other than store it? C++ uses
generics for this, not polymorphism. How does Delphi do objects
differently?
From looking it up, Delphi's primary type of object (which uses the
'class' keyword) is always dynamically allocated, and doesn't use a
pointer syntax, like in Java.
That's correct.
However, it isn't garbage collected, so you wind up with inconsistent
code appearances.
What do you mean by that?
Fortunately, Delphi classes do seem to include destructors (though I
didn't read much on them last night), so that's a definite improvement
over Java.
Destructors are pretty straightforward. They work more or less like they
do in C++, except that raising an exception when inside a destructor
doesn't kill your program.
Post by Mason Wheeler
What can you do with a plain Object other than store it?
A handful of different uses come to mind immediately, but they could
probably all be classified broadly as some variant of "storing". But
there's also serialization. I can use reflection to turn any Delphi object
into a string, and then take that string and turn it back into an object
of arbitrary type. (Assuming the object's class is registered with the
serializer, of course.)

Accomplishing this in C++ is far more difficult, with no base object type,
no reflection and no class references.
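
By way of contrast, a rough sketch of the manual bookkeeping C++ tends
to need here, using a hand-maintained registry of factory functions;
Serializable, Circle and registry are illustrative names, not a claim
about how Delphi's serializer works:

#include <iostream>
#include <map>
#include <string>

class Serializable {
public:
    virtual ~Serializable() {}
    virtual std::string serialize() const = 0;
};

// With no class references or reflection, the mapping from a class name
// back to a constructor has to be maintained by hand.
typedef Serializable *(*Factory)();

static std::map< std::string, Factory > &registry()
{
    static std::map< std::string, Factory > r;
    return r;
}

class Circle : public Serializable {
public:
    virtual std::string serialize() const { return "Circle"; }
    static Serializable *create() { return new Circle(); }
};

int main()
{
    registry()["Circle"] = &Circle::create;        // manual registration
    Serializable *obj = registry()["Circle"]();    // rebuild an object from its name
    std::cout << obj->serialize() << std::endl;
    delete obj;
    return 0;
}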
Kenneth Bull
2011-02-23 18:59:44 UTC
Permalink
And your proposed alternatives
are all a lot more work to implement than simple inheritance, so what are the
benefits that make it worth the extra work?
Inheritance is not simple and vtables add a lot to the size of objects in C++.
Interfaces at least would not require extra memory when implemented,
and as defined in Go, I expect they are actually quite a bit simpler
to implement.

Not that I don't like inheritance, mind you. Conceptually at least.
Paulo Pinto
2011-02-23 19:06:13 UTC
Permalink
The main issue with interfaces in C++ is called the diamond problem.

Conceptually, abstract classes can be used in C++ as interfaces are
used in other languages. The problem is that this requires the use of
multiple inheritance, and that opens the door to quite a few issues,
one of them being the diamond problem.

To solve it, you need to have access to the full source code of your
class hierarchy and apply virtual inheritance in the right places.

This is why the use of interfaces does not work that well in C++ when
compared with other languages.
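
A minimal sketch of that diamond, with illustrative names (Printable,
Logger, Reporter, Widget):

#include <iostream>

class Printable {                  // the abstract-class "interface"
public:
    virtual ~Printable() {}
    virtual void print() const = 0;
};

// Both intermediate classes must inherit Printable *virtually*, which
// is only possible if you can edit (or at least recompile) their source.
class Logger   : public virtual Printable { };
class Reporter : public virtual Printable { };

class Widget : public Logger, public Reporter {
public:
    virtual void print() const { std::cout << "Widget" << std::endl; }
};

void show(const Printable &p) { p.print(); }

int main()
{
    Widget w;
    show(w);   // fine: one shared Printable subobject.  Without the
               // virtual bases this conversion would be ambiguous,
               // because Widget would contain two Printable subobjects.
    return 0;
}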

--
Paulo
Mason Wheeler
2011-02-23 19:06:50 UTC
Permalink
And your proposed alternatives
are all a lot more work to implement than simple inheritance, so what are the
benefits that make it worth the extra work?
Inheritance is not simple and vtables add a lot to the size of objects in C++.
I don't follow. As long as you're using single inheritance (not gonna
touch MI with a 10-foot pole) the vtable pointer adds sizeof(pointer)
to each object instance. That's not much, unless you're working with a
bunch of really tiny objects. The vtables themselves can grow large if
you have a lot of virtual methods, but they're static data; they exist
per class, not per instance.
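
A rough way to see that per-instance cost; the exact numbers depend on
pointer size and padding, so the output is indicative only:

#include <iostream>

struct Plain {                 // no virtual functions: no vtable pointer
    int a, b, c;
};

struct WithVtable {            // one hidden vtable pointer per instance
    int a, b, c;
    virtual void doSomething() {}
};

int main()
{
    std::cout << "sizeof(Plain)      = " << sizeof(Plain) << std::endl;
    std::cout << "sizeof(WithVtable) = " << sizeof(WithVtable) << std::endl;
    // Typical 64-bit output: 12 and 24 (pointer plus alignment padding),
    // independent of how many virtual methods the class declares.
    return 0;
}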

And what do you mean that "inheritance is not simple"? If your objects are all
reference types, I don't see anything complicated about it. (Now when your
objects are value types and you have to worry about copy constructors, that's a
different matter, but that brings me back to my original point that C++'s object
model is fundamentally broken.)