"Events" are usually brought up in the context of the user interface. One way to conceive human-computer interaction is view the computer as waiting for the user to "do something", and, based what the user has done, execute something. The phrase "do something" includes: move the mouse to some user interface element and click on it, or move keyboard focus to some element, and press a key. The acts of moving the mouse, clicking the mouse, and pressing keys are termed "events" herein. However, I wish to emphasize that events do not occur solely in the context of the user interface. That is where the discussion will focus primarily; but, events do happen outside of that context, and remain, nonetheless, events in their essential character.
Events are commonly described in two ways. These are, alternatively, "low-level" vs. "high-level", "external" vs. "internal", or "physical" vs. "semantic" or "functional". The dichotomy is intended to distinguish between events that are caused by a hardware device such as a mouse; hence, "physical", "external", and "low-level"; or a programmatic event, such as a window appearing on screen; hence "internal", "semantic", or "high-level". I prefer the low vs. high level nomenclature since it entails abstraction and classification of different physical events as being equivalent. That is, a low level event is a singular kind of thing, whereas a high level event is defined by abstracting over a number of different low level events and classifying them as the same thing.
Take, for example, the idea of "dialing" a telephone. This concept has its roots in old-fashioned rotary phones where one rotated a dial in order to place a call. Today, most telephones are touch-tone, and "dialing" is accomplished by pushing buttons. In addition, there are situations where no dial of any kind is involved: for example, when a modem makes a call by generating a sequence of tones. In all three cases, we refer to the event as "dialing" the number. What is going on here is the act of placing the call has become the central concept, and the actual motions used to perform that act are less important or irrelevant. Another way of putting this is to say that rotating a dial, pushing buttons, or generating tones are all distinct low level events that form an equivalence class. That equivalence class is called "dialing a number" and it is formed by abstracting away from the minute details of performing the act, and focussing on what the low level events have in common (they all accomplish the act of dialing a number).
The same can be said for computer user interface events. Clicking on a button, moving keyboard focus to that button and hitting return, or even using speech recognition and speaking the button's label can all be said to amount to the same thing; namely, "pushing" the button [1]. The description "pushing the button" is a high-level one, whereas "clicking on it" and "hitting return" refer to the low-level events. High-level events are close to what is intended by users when they interact with an application, whereas low-level events are closer to the way of implementing that desired function.
This document discusses the event model, or event system, that was introduced with the Java development kit (JDK) version 1.1. The older system, while still supported for backward compatibility, is deprecated, and developers are encouraged to use the new system.
Interestingly enough, the impetus for the event system came not from the user interface, but from Java Beans. To put it briefly, one can view an application as being made up of a number of simpler pieces -- these parts are referred to as beans. The beans realize a larger application by interconnecting themselves in some reasonable way, and then passing information among themselves. The way this is accomplished (leaving out a lot of details) is as follows. First, a specific bean has a set of properties that change over time. Secondly, when one of its properties changes, a bean broadcasts this event to any other object that is listening for that change. When these listeners hear of the change, they then cause changes in themselves or other beans. The result is that the group of beans cooperatively realize the functionality of some larger application.
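As a small sketch of this bean-style broadcasting, the following uses the standard java.beans support classes; the Thermometer bean and its "temperature" property are invented purely for illustration.

    import java.beans.PropertyChangeEvent;
    import java.beans.PropertyChangeListener;
    import java.beans.PropertyChangeSupport;

    // A hypothetical bean with a single property, "temperature" (invented for illustration).
    class Thermometer {
        private final PropertyChangeSupport support = new PropertyChangeSupport(this);
        private double temperature;

        public void addPropertyChangeListener(PropertyChangeListener l) {
            support.addPropertyChangeListener(l);
        }

        public void setTemperature(double t) {
            double old = temperature;
            temperature = t;
            // Broadcast the change to any registered listeners.
            support.firePropertyChange("temperature", old, t);
        }
    }

    public class BeanEventSketch {
        public static void main(String[] args) {
            Thermometer thermometer = new Thermometer();
            thermometer.addPropertyChangeListener(new PropertyChangeListener() {
                public void propertyChange(PropertyChangeEvent evt) {
                    // A listening object reacting to the broadcast.
                    System.out.println(evt.getPropertyName() + " changed to " + evt.getNewValue());
                }
            });
            thermometer.setTemperature(21.5);   // prints: temperature changed to 21.5
        }
    }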
The three main aspects of this event system are event sources, events, and event listeners. It is the sources that broadcast or emit specific events, and interested listeners that handle those events. To make this more concrete, and to bring it back to the graphic user interface (GUI), consider a button widget. The button is an event source that notifies its handlers (listeners) when pushed.
An additional feature of the system is that the listeners can be attached and removed from a source dynamically, such that the result of interacting with that source can change over time. The source itself does not change -- the button remains a button, and the events it emits are also fixed. But, the reaction to the events emitted is highly plastic. In this sense, the control of a GUI widget is said to be "pluggable", since one can plug and unplug a number of different listeners into a GUI widget. In fact, as a programmer, you concentrate on implementing various listener objects, and attaching/removing them to various event sources as needed. You rarely create new event types, nor event sources.
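Here is a minimal sketch of that pluggability, assuming a Swing push button; the "saving" and "printing" behaviours are invented for illustration. One listener is unplugged and a different one plugged in, while the button itself never changes.

    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import javax.swing.JButton;
    import javax.swing.JFrame;

    public class PluggableListeners {
        public static void main(String[] args) {
            JButton button = new JButton("Go");

            // Two interchangeable behaviours (invented for illustration).
            ActionListener saveBehaviour = new ActionListener() {
                public void actionPerformed(ActionEvent e) { System.out.println("saving..."); }
            };
            ActionListener printBehaviour = new ActionListener() {
                public void actionPerformed(ActionEvent e) { System.out.println("printing..."); }
            };

            button.addActionListener(saveBehaviour);     // pushing the button now "saves"

            // Later, the reaction is changed without touching the button itself.
            button.removeActionListener(saveBehaviour);
            button.addActionListener(printBehaviour);    // the same button now "prints"

            JFrame frame = new JFrame("Pluggable listeners");
            frame.getContentPane().add(button);
            frame.pack();
            frame.setVisible(true);
        }
    }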
The base of the GUI toolkit for Java is represented by the Abstract Window Toolkit, or "AWT", and this section discusses the event model in the context of the AWT. However, much that appears here applies equally well to the newer GUI toolkit, called "Swing". (It had better, since Swing is derived from and parasitic on many aspects of the AWT).
To reiterate, at the level of a GUI widget, one must keep in mind the three main aspects of the model: an event source (the widget itself), events that it emits, and event listeners. These are three tightly coupled, albeit independent, aspects of the system. There is a stronger association between events and listeners in that the type of event pretty much defines the type of listener. Still, the architecture of the AWT (and Swing) associates event types with widget types, although the connection is weaker in this case. Hopefully, this will become clearer as you read along.
The discussion will concentrate on high-level events as there are more of them than there are low level events. But, before even discussing them in any detail, it is important to say something about what an event is; or, what do all events have in common?
At its base, an event does nothing more than record its source, and publicly offer that information. What is meant by "source"? The source of the event is the Java object that is broadcasting the event. This is perhaps one of the concepts of the Java event model that can trip you up -- confusion as to what "source" means, and the fact that there must be one. For example, consider a "mouse down" event. The event source is not the mouse; rather it is the object clicked on. Furthermore, at the level of Java, one is guaranteed to get an answer to the question, "what Java object is your source?", asked of the "mouse down" event. This is very different from other event models [2]. On the Macintosh, for example, a mouse down event contains information such as the location of the mouse cursor at the time of the event, and the time of its occurrence; but it says nothing about the object under the mouse at that time. In Java, it is quite impossible to not know what object the event is associated with. The event must have a Java object as its source.
Note at this basic level, the Java event class is not associated with the user interface. That makes sense, since the design of the event model is meant to be used in contexts other than the user interface. That is, here is a basic definition of event that anything can use, including a user interface. A user interface event is simply a specialized case of this basic event class.
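A minimal sketch of such a non-GUI event, built directly on the base java.util.EventObject class; the StockUpdateEvent class and the ticker object are invented for illustration.

    import java.util.EventObject;

    // A hypothetical event with no user-interface ties at all (invented for illustration).
    class StockUpdateEvent extends EventObject {
        private final double price;

        public StockUpdateEvent(Object source, double price) {
            super(source);   // every event must record a source object
            this.price = price;
        }

        public double getPrice() { return price; }
    }

    public class BasicEventSketch {
        public static void main(String[] args) {
            Object ticker = new Object();   // any Java object can serve as an event source
            StockUpdateEvent event = new StockUpdateEvent(ticker, 42.0);
            System.out.println("source: " + event.getSource() + ", price: " + event.getPrice());
        }
    }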
Moving to the more specialized case of a user interface event, what is added to the event class? To be more concrete, what is an AWT event? In addition to the source, the AWT event records the type of event, provides a means whereby the event can be marked as "consumed", and a means by which client objects can inquire whether the event has been consumed.
The "type" of AWT event includes things like "hidden", "shown", "moved", "resized", and so on. The AWTEvent class itself does not define these; it is up to derived event classes to fill in this information. For example, the FocusEvent class will offer a value of either "focus gained" or "focus lost" as the event's type. One way to think of these is as sub-types. The main type of a FocusEvent is a "focus" event (really!), and it has two sub-types: "gained" or "lost".
The "consumed" property of AWT events is meant to indicate whether the event has been handled, and whether further event handling should be undertaken. Any event handler can, at its option, mark the event as consumed to stop any further handling of that event.
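As a hedged illustration (restricting a text field to digits is an invented use case), a key listener can mark unwanted "key typed" events as consumed so that the text field never processes them.

    import java.awt.event.KeyAdapter;
    import java.awt.event.KeyEvent;
    import javax.swing.JFrame;
    import javax.swing.JTextField;

    public class ConsumeSketch {
        public static void main(String[] args) {
            JTextField field = new JTextField(10);
            field.addKeyListener(new KeyAdapter() {
                public void keyTyped(KeyEvent e) {
                    if (!Character.isDigit(e.getKeyChar())) {
                        e.consume();   // mark the event as consumed; the field ignores it
                    }
                }
            });

            JFrame frame = new JFrame("Digits only");
            frame.getContentPane().add(field);
            frame.pack();
            frame.setVisible(true);
        }
    }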
Since all AWT events are events, they must have an event source. Other than that, there is nothing more to an AWT event. The other user interface event classes are used to define specific event types, and it is to these we turn.
In Java, the basic type of GUI object, or widget, is termed a "Component". It is the class from which all other widgets are derived including buttons, check boxes, windows, menus, and so on. Since it represents the basic features of all other widgets, the kinds of events it emits form a common base for the more specific widget types. As the widgets themselves become more defined -- it's not just a component, but a list component -- more specialized events are defined.
To understand the Component events better, think about what one can do with a Component, without worrying about the Component's type. Since we are talking about a graphic user interface, a Component must occupy some area of the screen. Thus, a Component must have a position and size, at least. Occasionally it is useful to temporarily hide a Component without disposing of it, so a Component can change its visibility over time. In addition, since a mouse and keyboard are a common means of accessing components in a GUI, there are a set of events that have to do with the mouse and keyboard. This is explained in more detail in the next four sections.
As noted above, there is a triumvirate of objects to consider when dealing with events. First is the source, second is the event itself, and third is the listener that handles the event. With respect to Component events, the source is the Component, the main event type is ComponentEvent, and the listener is ComponentListener. By examining the methods of ComponentListener, one can see the kinds of events that Components broadcast. Note that in the case of ComponentEvents, by the time the listener has been notified of the event, it has already taken place. Strictly speaking, it is somewhat inaccurate to call the listener a "handler", since the event has already been dealt with by the underlying system. Instead, one can look upon the listener as a means of relaying what has just happened.
A Component's visibility is controlled through its setVisible() method. If this is called with the argument "false", then the component will be made invisible, and emit a "hidden" event. If setVisible() is called with a "true" argument, the component will be revealed, and emit a "shown" event. Note that the "hidden" and "shown" events say nothing about whether the Component is obscured by another Component. Thus, a button in one window can be made visible/invisible and emit "shown"/"hidden" events even if another window lies in front of it. In that case, the user cannot actually see the button in question as it changes its visibility property.
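A small sketch of listening for these visibility events; the button and the printed messages are only illustrative.

    import java.awt.event.ComponentAdapter;
    import java.awt.event.ComponentEvent;
    import javax.swing.JButton;
    import javax.swing.JFrame;

    public class VisibilitySketch {
        public static void main(String[] args) {
            JButton button = new JButton("Now you see me");
            button.addComponentListener(new ComponentAdapter() {
                public void componentShown(ComponentEvent e) {
                    System.out.println("button was shown");
                }
                public void componentHidden(ComponentEvent e) {
                    System.out.println("button was hidden");
                }
            });

            JFrame frame = new JFrame("Visibility");
            frame.getContentPane().add(button);
            frame.pack();
            frame.setVisible(true);

            button.setVisible(false);   // the button emits a "hidden" event
            button.setVisible(true);    // the button emits a "shown" event
        }
    }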
In a GUI, the usual way of interacting with a Component is with the mouse; however, Components can also be activated via a keypress. In order for that to happen, the Component in question must have keyboard focus. Keyboard focus can move from Component to Component over time, and, at any given time, only one Component can have focus. Thus, by listening for FocusEvents, one can determine where focus is currently, and where it is going.
There are only two event sub-types with respect to focus, namely, "focus gained" and "focus lost". Both are emitted when focus moves -- the Component that is losing focus emits the "focus lost" event just after it loses focus, and the Component that subsequently gains focus emits a "focus gained" event just after it receives it. And, these events occur in that order. Thus, if you are interested in focus events on a Component, or a number of Components, you implement a FocusListener (or several) and attach them to the Components of interest.
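A minimal sketch of tracking focus, assuming two Swing text fields; the component names and printouts are invented for illustration. Note that a single listener instance can be attached to any number of Components.

    import java.awt.FlowLayout;
    import java.awt.event.FocusEvent;
    import java.awt.event.FocusListener;
    import javax.swing.JFrame;
    import javax.swing.JTextField;

    public class FocusSketch {
        public static void main(String[] args) {
            // One listener instance can watch any number of Components.
            FocusListener reporter = new FocusListener() {
                public void focusGained(FocusEvent e) {
                    System.out.println(e.getComponent().getName() + " gained focus");
                }
                public void focusLost(FocusEvent e) {
                    System.out.println(e.getComponent().getName() + " lost focus");
                }
            };

            JTextField first = new JTextField(8);
            first.setName("first");
            JTextField second = new JTextField(8);
            second.setName("second");
            first.addFocusListener(reporter);
            second.addFocusListener(reporter);

            JFrame frame = new JFrame("Focus");
            frame.getContentPane().setLayout(new FlowLayout());
            frame.getContentPane().add(first);
            frame.getContentPane().add(second);
            frame.pack();
            frame.setVisible(true);
        }
    }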
All Components have a requestFocus() method, whereby they can ask to have focus moved to them, although some Components are considered "unfocusable", and in these instances this method does nothing. An example of such a Component is a label, or static text. The rationale is that since one cannot do anything with a label -- clicking on it does nothing; and since it is static, the user cannot edit it -- allowing keyboard focus to move to it is useless. (Note, however, from an accessibility point of view there is something very similar to focus traversal, namely interface navigation, where it is useful to allow users to move their psychological focus to anything in the user interface. Thus, while a label is inert, it is still useful to have something like the ability to move focus to it on occasion).
Keyboard interaction is defined at the level of Component. That is, once a Component has keyboard focus, then that Component will broadcast KeyEvents, and these can be handled by implementing KeyListeners. There are three sub-types of KeyEvent:
- "key pressed" and "key released" -- These sub-types offer a method, getKeyCode(), to retrieve which key was depressed. In this case, the key can be any key, whether that key represents something printable or not. The latter would include function keys, for example. In other words, the information here is fairly low-level in that the system is informing the program which key on the keyboard was pressed.
- "key typed" -- This sub-type offers a method, getKeyChar(), that returns the Unicode value of the key typed. There are some quirky aspects to this. First, if one uses the getKeyCode() method for a "key-typed" event, it will return "undefined" -- one uses getKeyCode() for "pressed" and "released" events only, and getKeyChar() for "typed" events. Secondly, for multi-byte languages, such as Chinese, the "key-typed" event will not occur until all the bytes that make up a single character have been entered. Thus, there can be a series of "pressed" and "released" KeyEvents before there is a single "typed" event. In this way the "key-typed" event is a relatively higher level event encapsulating a sequence of key presses/releases that encode the character typed.
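To make the distinction concrete, here is a small sketch of a key listener that reports both kinds of information; the text field and printouts are only illustrative.

    import java.awt.event.KeyEvent;
    import java.awt.event.KeyListener;
    import javax.swing.JFrame;
    import javax.swing.JTextField;

    public class KeySketch {
        public static void main(String[] args) {
            JTextField field = new JTextField(15);
            field.addKeyListener(new KeyListener() {
                public void keyPressed(KeyEvent e) {
                    // Low-level: which physical key went down.
                    System.out.println("pressed, key code: " + e.getKeyCode());
                }
                public void keyReleased(KeyEvent e) {
                    System.out.println("released, key code: " + e.getKeyCode());
                }
                public void keyTyped(KeyEvent e) {
                    // Higher-level: which Unicode character resulted.
                    System.out.println("typed, character: " + e.getKeyChar());
                }
            });

            JFrame frame = new JFrame("Key events");
            frame.getContentPane().add(field);
            frame.pack();
            frame.setVisible(true);
        }
    }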
Finally, mouse events are broadcast at the basic Component level. While there is but one main MouseEvent type, there are two kinds of listeners. The MouseListener is concerned with mouse clicks, and with detecting when the mouse cursor moves among various Components. The MouseMotionListener is what is used to handle mouse movements within a Component. The following carves up the mouse event sub-types by listener. First, the MouseListener events:
- "mouse pressed" and "mouse released" -- emitted when a mouse button is pressed down over the Component, and when that button is subsequently released.
- "mouse clicked" -- emitted when a mouse button is pressed and then released over the Component. This sub-type offers a method (getClickCount()) whose purpose is to inform whether it was a single, double, or triple click. Thus, like the "key-typed" KeyEvent, the "mouse-clicked" event is somewhat higher level than the "pressed" and "released" events.
- "mouse entered" and "mouse exited" -- emitted when the mouse cursor moves into, and out of, the Component's bounds.
The MouseMotionListener handles the remaining sub-types, "mouse moved" and "mouse dragged", emitted as the cursor moves within the Component with no mouse button held down, or with a button held down, respectively.
In brief, these are the main types of events for all Components: ComponentEvents, FocusEvents, KeyEvents, and MouseEvents. Since Java is an object-oriented language, all widgets derived from Component inherit the ability to broadcast these types of events. Thus, for example, a radio button, by dint of being a Component, is a ComponentEvent, FocusEvent, KeyEvent, and MouseEvent source.
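As an illustrative sketch (detecting double-clicks on a label is an invented use case), the click count can be read from the "mouse clicked" event.

    import java.awt.event.MouseAdapter;
    import java.awt.event.MouseEvent;
    import javax.swing.JFrame;
    import javax.swing.JLabel;

    public class ClickSketch {
        public static void main(String[] args) {
            JLabel label = new JLabel("Double-click me");
            label.addMouseListener(new MouseAdapter() {
                public void mouseClicked(MouseEvent e) {
                    if (e.getClickCount() == 2) {
                        System.out.println("double click");
                    }
                }
                public void mouseEntered(MouseEvent e) {
                    System.out.println("cursor entered the label");
                }
            });

            JFrame frame = new JFrame("Mouse events");
            frame.getContentPane().add(label);
            frame.pack();
            frame.setVisible(true);
        }
    }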
A programmer might be led to the conclusion that this is all that is needed, and that one can, for example, handle mouse interaction with a radio button by implementing a MouseListener, and adding that listener to the radio button. Now while this is possible, it is not advisable, and there is a better way of handling mouse clicks on a radio button. Broadly speaking, for each specific Component type, there is a corresponding type of event that is more meaningful than a simple mouse click. That is, if there is a rationale for having a specific kind of widget in one's user interface toolkit, then there may as well be an equally specific kind of event that represents interactions with that widget. In short, the kinds of events a widget broadcasts are keyed to that widget's raison d'être. To continue with the example of radio buttons, they typically appear in a group in order to allow a choice among a set of mutually exclusive possibilities. And, usually what the programmer wants to know is which radio button is selected, not specifically how it was selected. In Java, when a user interacts with a radio button using either the mouse or keyboard, the radio button emits an ItemEvent, which in turn indicates whether the radio button has been selected or deselected. In other words, to handle radio buttons, it is better to simply ignore key strokes and mouse button presses, and attend to whether the button was selected or deselected. It does not matter how it came to be selected; only that it was selected.
The next sections illustrate this further, first with respect to Components that can contain other Components, and then with some of the standard repertoire of GUI widgets such as menus, push buttons, and check boxes.
A Container is a Component whose purpose is to hold other Components within its bounds, and handle their layout. Again, since we are dealing with a GUI, the containment relationship is a visual one. One of the implications is that a Component can have only one direct Container parent. For example, a push button (a Component) cannot belong to two different windows (a specific kind of Container), nor even to two different button groupings within the same window.
At this basic level of visual container, the ContainerEvent has only two event sub-types, namely whether a component has been added to the container or removed from it. So, when a Component is added to a Container, that Container broadcasts a "component added" event; similarly, when a Component is removed from a Container, the Container emits a "component removed" event. Note that it does not matter how the Component was added or removed -- it could have been dragged into the Container using a mouse, or it could have happened programmatically. Indeed, two methods of Container, add() and remove(), exist specifically for adding and removing Components from the Container.
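A small sketch of listening for these additions and removals; the panel and its contents are invented for illustration.

    import java.awt.event.ContainerEvent;
    import java.awt.event.ContainerListener;
    import javax.swing.JButton;
    import javax.swing.JPanel;

    public class ContainerSketch {
        public static void main(String[] args) {
            JPanel panel = new JPanel();
            panel.addContainerListener(new ContainerListener() {
                public void componentAdded(ContainerEvent e) {
                    System.out.println("added: " + e.getChild().getName());
                }
                public void componentRemoved(ContainerEvent e) {
                    System.out.println("removed: " + e.getChild().getName());
                }
            });

            JButton button = new JButton("Transient");
            button.setName("transient button");
            panel.add(button);      // the panel broadcasts a "component added" event
            panel.remove(button);   // the panel broadcasts a "component removed" event
        }
    }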
A special kind of container is a window. And, windows themselves can be specialized further: ones with or without a title bar, with or without a resize widget, or a dialogue type of window. Regardless of the specific purpose of the window, all emit WindowEvents, and there are five sub-types. For example, one sub-type is emitted the first time a window is made visible via its show() method, and another is emitted when a window is brought to the front via its show() or toFront() methods.
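For instance, here is a sketch of a window listener; disposing of the window when the "window closing" sub-type arrives is just one common choice, not the only one.

    import java.awt.event.WindowAdapter;
    import java.awt.event.WindowEvent;
    import javax.swing.JFrame;

    public class WindowSketch {
        public static void main(String[] args) {
            JFrame frame = new JFrame("Window events");
            frame.addWindowListener(new WindowAdapter() {
                public void windowOpened(WindowEvent e) {
                    System.out.println("window opened");
                }
                public void windowClosing(WindowEvent e) {
                    System.out.println("user asked to close the window");
                    e.getWindow().dispose();
                }
            });
            frame.setSize(300, 200);
            frame.setVisible(true);
        }
    }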
Up until this point, there has been a one-to-one correspondence between a Component type and the types of events it broadcasts. Components emit ComponentEvents, Containers are ContainerEvent sources, and Windows emit WindowEvents. To put it another way, Components that are not Windows are not WindowEvent sources. With respect to the ActionEvent (and other high level event types), this correspondence does not hold in that a number of different Component types are ActionEvent sources.
One ActionEvent source is the push button. Examples of this GUI element include the "Ok" and "Cancel" buttons that appear in dialogues. Under the Java event system, when users push the button, either by clicking on it, or using a keystroke to activate it, the button emits an ActionEvent. The corresponding listener is called an ActionListener, and there is only one sub-type of action event:
"action performed". This event is emitted whenever the button is activated, no matter how. (In Swing, activation can even be done programmatically: buttons have a doClick() method which effectively causes any object derived from AbstractButton to emit an ActionEvent).
Menu items are another ActionEvent source. By "menu item" is meant a plain item that does not lead to another sub-menu. Consider that menus are populated by items that can be checked or unchecked like check boxes, groups of items that can be selected like radio buttons, hierarchical menus, and, finally, plain vanilla items. It is the latter that is the topic of discussion here. Examples include file operations such as "Open...", "Close", and "Save"; and edit items such as "Cut", "Copy", and "Paste". Again, like push buttons, these menu items have a "do it" semantics: "save the file", or "copy the selection", and so on, and they are appropriately conceived of as ActionEvent sources.
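A sketch of handling both kinds of source with a single action listener; the "save the file" semantics are invented for illustration.

    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import javax.swing.JButton;
    import javax.swing.JMenuItem;

    public class ActionSketch {
        public static void main(String[] args) {
            ActionListener saver = new ActionListener() {
                public void actionPerformed(ActionEvent e) {
                    // It does not matter whether the button or the menu item was used.
                    System.out.println("save the file");
                }
            };

            JButton saveButton = new JButton("Save");
            saveButton.addActionListener(saver);

            JMenuItem saveItem = new JMenuItem("Save");
            saveItem.addActionListener(saver);

            saveButton.doClick();   // a programmatic "push"; the listener is notified all the same
        }
    }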
In point of fact, in Swing, the various kinds of menu items are derived, ultimately, from the AbstractButton class, and are all therefore technically ActionEvent sources. (For that matter, menus themselves are derived from AbstractButton). However, it is not always appropriate to handle the selection of, say, a checkable menu item via an ActionListener, since all that the ActionEvent says is that the checkable menu item was selected. It does not indicate what its current checked state is. For that information, one needs a different type of event, to which the discussion will now turn.
A number of GUI widgets have a dual state nature in that they can either be "on" or "off". Another way of saying this is that they are either "selected" or "not selected". Toggle buttons are widgets that mimic a button that can be pushed in and later released -- a real world example is the "pause" button on a VCR or CD player. In a GUI, toggle buttons are typically found in a formatting toolbar of a word processor where they indicate the style of the text including bold, italic, underlined, and so on. The toggle can be pushed in to make that style take effect, and released to ignore that style.
Check boxes are labelled boxes that have some indicator within the box to indicate whether the box is checked or not. Both check boxes and toggle buttons may or may not occur in groups, but, when they do, any number of them can be simultaneously selected.
Radio buttons always occur in groups, and indicate one of a number of selections. The typical rendering of them is as a labelled circle which can be filled to indicate selection, and is empty to show that that choice is not selected. Both radio buttons and check boxes can also appear within a menu as checkable menu items.
To respond to the selection of toggle buttons, check boxes, and radio buttons, programmers are advised to implement an ItemListener and listen for ItemEvents. ItemEvents have two sub-types, "item selected" and "item deselected", indicating whether the widget in question has just become selected or deselected.
Recall that, ultimately, all of these types of buttons are derived from AbstractButton, and inherit its doClick() method. For these buttons, doClick() changes the current selected state of the button, and then causes it to emit an ItemEvent of the appropriate sub-type. Calling doClick() will simultaneously cause the broadcast of an ActionEvent. In addition, these buttons have a method setSelected() which allows one to programmatically change their state and cause the source to emit an ItemEvent, but not an ActionEvent.
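A small sketch of this, assuming a Swing check box; the "Bold" label and printouts are invented for illustration.

    import java.awt.event.ItemEvent;
    import java.awt.event.ItemListener;
    import javax.swing.JCheckBox;

    public class ItemSketch {
        public static void main(String[] args) {
            JCheckBox bold = new JCheckBox("Bold");
            bold.addItemListener(new ItemListener() {
                public void itemStateChanged(ItemEvent e) {
                    if (e.getStateChange() == ItemEvent.SELECTED) {
                        System.out.println("bold turned on");
                    } else {
                        System.out.println("bold turned off");
                    }
                }
            });

            bold.setSelected(true);   // emits an ItemEvent, but no ActionEvent
            bold.doClick();           // toggles the box; emits an ItemEvent and an ActionEvent
        }
    }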
While on the topic of selection events, consider the AWT list Component. Its purpose is to display a list of choices, and can be configured to allow the selection of either a single item or multiple items. As the user makes or modifies their selection(s), the list emits ItemEvents that indicate how the selection has changed. One way that the user changes their selection is by single clicking on list items. In addition, some lists allow double clicking on a list item to mean "do something" with this item. An example is a file-open dialogue, where double-clicking on a file in the list view is taken to mean "open this file". In this case, not only does the list emit an ItemEvent, but also an ActionEvent, and the corresponding ActionListener's job, in this example, is to implement the opening of that file. Thus, an AWT list is profitably viewed as both an ItemEvent source for selections, and an ActionEvent source for doing something with the selected item(s). (Note: Swing's list widget is more sophisticated, and as a result, has its own specialized event termed ListSelectionEvent. Swing lists, unlike AWT lists, are not ItemEvent sources, but are ListSelectionEvent sources).
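A sketch of treating an AWT List as both kinds of source; the file names are invented for illustration. The item listener hears single-click selection changes, and the action listener hears double clicks.

    import java.awt.Frame;
    import java.awt.List;
    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import java.awt.event.ItemEvent;
    import java.awt.event.ItemListener;

    public class ListSketch {
        public static void main(String[] args) {
            final List files = new List(4, false);   // a single-selection AWT list
            files.add("letter.txt");                 // file names invented for illustration
            files.add("notes.txt");

            files.addItemListener(new ItemListener() {
                public void itemStateChanged(ItemEvent e) {
                    // A single click changes the selection.
                    System.out.println("selection is now: " + files.getSelectedItem());
                }
            });
            files.addActionListener(new ActionListener() {
                public void actionPerformed(ActionEvent e) {
                    // A double click means "do something" with the item.
                    System.out.println("open " + e.getActionCommand());
                }
            });

            Frame frame = new Frame("Files");
            frame.add(files);
            frame.pack();
            frame.setVisible(true);
        }
    }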
There are numerous other Components in the AWT and Swing toolkits. These include tool tip, label, slider, progress bar, tabbed pane, combo box, list, tree, table, tool bar, menu bar, alert dialogues, file choosing dialogues, colour choosing dialogues, and a variety of text based widgets. Each makes use of the types of events described above as appropriate; but, in addition, each type of widget defines a new event type in order to handle the kinds of interactions that the widget was designed for. Consider this somewhat esoteric example: There is a HyperlinkEvent type in the Swing text package for dealing with user/hyperlink interactions. The so-called JEditorPane widget broadcasts a HyperlinkEvent under the following circumstances: when the user moves the mouse cursor over a link (the link is "entered", or armed), when the cursor moves back off the link (the link is "exited"), and when the user clicks on the link (the link is "activated").
In all cases, the URL of the link can be acquired from the HyperlinkEvent by calling its getURL() method, and the program can use that information to determine, say, the protocol of the link, be it "http:", "file:", "gopher:", "mailto:", or some other protocol. Furthermore, by implementing a HyperlinkListener, the programmer can handle the event in some fashion. Examples include displaying the link in a status bar when the user arms a link, or popping up an email edit window when the user activates a "mailto:" link.
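A sketch of such a listener, assuming a non-editable JEditorPane displaying a snippet of HTML; the link and printouts are invented for illustration.

    import javax.swing.JEditorPane;
    import javax.swing.JFrame;
    import javax.swing.JScrollPane;
    import javax.swing.event.HyperlinkEvent;
    import javax.swing.event.HyperlinkListener;

    public class HyperlinkSketch {
        public static void main(String[] args) {
            JEditorPane pane = new JEditorPane("text/html",
                    "<html>See <a href='http://www.example.com/'>this link</a>.</html>");
            pane.setEditable(false);   // hyperlink events are only fired by non-editable panes
            pane.addHyperlinkListener(new HyperlinkListener() {
                public void hyperlinkUpdate(HyperlinkEvent e) {
                    if (e.getEventType() == HyperlinkEvent.EventType.ENTERED) {
                        System.out.println("arming: " + e.getURL());
                    } else if (e.getEventType() == HyperlinkEvent.EventType.ACTIVATED) {
                        System.out.println("activating: " + e.getURL());
                    }
                }
            });

            JFrame frame = new JFrame("Hyperlinks");
            frame.getContentPane().add(new JScrollPane(pane));
            frame.setSize(300, 150);
            frame.setVisible(true);
        }
    }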
The Java event system is relatively sophisticated, and designed to allow programmers to handle everything from low level mouse clicks and key presses to higher level user interaction such as activating hyperlinks in HTML documents. In addition, given the object oriented nature of Java, the event system is extensible: it allows the derivation of new user interface elements for as yet unthought-of user actions, and the definition of new event types and event handlers for those actions.
Another effect of the object oriented nature of Java is that GUI elements are themselves objects. These objects have methods for "activating" their functionality, programmatically, as it were. This is a Good Thing (TM) from the point of view of adaptive interfaces since it allows access to the underlying function of a GUI widget without having to manipulate it with a mouse and keyboard. Thus, if you want to add items to a list's selection, call its addSelectionInterval() method with the appropriate list item indices. You do not have to somehow simulate a mouse click, or keystroke, or combination of the same to change the selection. "All" you need do is detect some other user gesture, and connect it to the requisite method.
Furthermore, the system is rigged such that if you do manipulate the object using a method, instead of, say, a mouse, then the events that would fire normally as a result of the mouse click will continue to fire. The event listeners, with specific exceptions, will all be notified even if you do not use the mouse. The specific exceptions are the low level mouse and keyboard event listeners since they won't (can't) be notified if the mouse/keyboard is not used. But then, in a sense, they are irrelevant for the most part. You normally do not implement mouse/keyboard event handlers for Java GUI widgets; instead you implement the higher level event handlers -- action handlers for buttons, selection handlers for check boxes, hyperlink handlers for hyperlink events, and so on. It is only under rare conditions that you care about low level events. One example is when it is important to emulate a user in automating quality assurance tests of software. In that case, you do want to simulate low level events; and you can either create, "by hand", the low level events and dispatch them, or, in Java 1.3, make use of the Robot class. But that's a topic for another time.
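A sketch of this behaviour, assuming a Swing JList and its ListSelectionListener: the selection is changed with a method call rather than a mouse click, and the listener is notified all the same. The list contents are invented for illustration.

    import javax.swing.JList;
    import javax.swing.event.ListSelectionEvent;
    import javax.swing.event.ListSelectionListener;

    public class ProgrammaticSelection {
        public static void main(String[] args) {
            final JList choices = new JList(new String[] { "red", "green", "blue" });
            choices.addListSelectionListener(new ListSelectionListener() {
                public void valueChanged(ListSelectionEvent e) {
                    System.out.println("selection changed; first selected value: "
                            + choices.getSelectedValue());
                }
            });

            // No mouse, no keyboard: select the first two items with a method call.
            // The selection listener is notified just as if the user had clicked.
            choices.addSelectionInterval(0, 1);
        }
    }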
In brief and in general, if you want to activate the functionality of a GUI widget in Java, search for the appropriate method, and do not worry about simulating events. And, if your purpose is to handle an event emitted by a widget, determine the most appropriate high level event it emits, and implement the listener for that.
Copyright © 2000 Adaptive Technology Resource Centre, University of Toronto.
Verbatim copying and distribution of this entire article is permitted in any medium, provided this notice is preserved.
Updated: 04 Nov 2012