Shape-driven Coordinate Ordering for Star Glyph Sets via Reinforcement Learning

IEEE Transactions on Visualization and Computer Graphics 2021

Ruizhen Hu1    Bin Chen1    Juzhan Xu1    Oliver van Kaick2    Oliver Deussen3    Hui Huang1*

1Shenzhen University    2Carleton University    3University of Konstanz

Fig. 1. Ordering axes for a set of star glyphs using reinforcement learning. For the same set of high-dimensional data points, we can optimize the axis order of the corresponding star glyphs according to different perceptual criteria: (a) spike and salient shape strategies that place dissimilar axes close by; (b) maximizing class separability. The class labels of the glyphs are indicated by blue/red color. Star glyphs of the same data point (but with different axis orders) are drawn at the same position inside each plot for comparison.


We present a neural optimization model trained with reinforcement learning to solve the coordinate ordering problem for sets of star glyphs. Given a set of star glyphs associated with multiple class labels, we propose to use shape context descriptors to measure the perceptual distance between pairs of glyphs, and use the derived silhouette coefficient to measure the perceived class separability within the entire set. To find the optimal coordinate order for the given set, we train a neural network with reinforcement learning to reward orderings with high silhouette coefficients. The network consists of an encoder and a decoder with an attention mechanism. The encoder employs a recurrent neural network (RNN) to encode input shape and class information, while the decoder, together with the attention mechanism, employs another RNN to output a sequence with the new coordinate order. In addition, we introduce a neural network to efficiently estimate the similarity between shape context descriptors, which speeds up the computation of silhouette coefficients and thus the training of the axis ordering network. Two user studies demonstrate that the orders provided by our method are preferred by users for perceiving class separation. We test our model under different settings to show its robustness and generalization abilities, demonstrating that it can order input sets with unseen data size, data dimension, or number of classes. We also show that our model can be adapted to coordinate ordering for other types of plots, such as RadViz, by replacing the proposed shape-aware silhouette coefficient with the corresponding quality metric to guide network training.
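To make the separability measure concrete: the silhouette coefficient can be computed from any precomputed pairwise perceptual-distance matrix. Below is a minimal NumPy sketch; the function name and interface are ours for illustration, not taken from the released code:

```python
import numpy as np

def silhouette_coefficient(dist, labels):
    """Mean silhouette over all points, given a pairwise distance
    matrix `dist` (n x n) and integer class `labels` (n,)."""
    n = len(labels)
    scores = []
    for i in range(n):
        same = (labels == labels[i])
        same[i] = False                      # exclude the point itself
        if not same.any():
            continue                         # singleton class: undefined
        a = dist[i, same].mean()             # mean intra-class distance
        b = min(dist[i, labels == c].mean()  # nearest other class
                for c in set(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))
```

A high value (close to 1) means glyphs of the same class are perceptually closer to each other than to glyphs of other classes, which is exactly what the ordering network is rewarded for.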

Fig. 2. Overview of our method, illustrated using a data set of four 8D data points with 2 class labels indicated by red/blue colors. Given a set of data points with associated coordinate ordering (a), we first accumulate the data information along each coordinate (b), and then feed this information in sequences to the ordering network (c) to get an optimized coordinate order (d). The optimization of the network is guided by the shape-driven class separation measure, i.e., the silhouette coefficient based on the shape distance given by the shape context, computed on the resulting set of star glyphs with the new coordinate order (e).

Fig. 3. Shape context computation: (a) We sample the contour of the star glyph; (b) For each sample point pt, we compute the relative position of all the other sample points; (c) We count the number of points falling into each bin of a log-polar grid centered at pt, to produce a spatial histogram; (d) The histogram is stored as a two-dimensional matrix.
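The steps in Fig. 3 can be sketched as follows, assuming the glyph contour has already been sampled into 2D points. The bin counts and radial range here are illustrative choices, not necessarily the paper's settings:

```python
import numpy as np

def shape_context(points, n_r=5, n_theta=12):
    """For each sample point, histogram the relative positions of all
    other points on a log-polar grid (n_r radial x n_theta angular
    bins). Returns an array of shape (n, n_r, n_theta)."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    diff = pts[None, :, :] - pts[:, None, :]        # (n, n, 2) offsets
    dist = np.linalg.norm(diff, axis=-1)
    dist = dist / dist[dist > 0].mean()             # normalize scale
    theta = np.arctan2(diff[..., 1], diff[..., 0])  # angles in [-pi, pi]
    # Log-spaced radial edges and uniform angular edges.
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1)
    t_edges = np.linspace(-np.pi, np.pi, n_theta + 1)
    hists = np.zeros((n, n_r, n_theta))
    for i in range(n):
        mask = np.arange(n) != i                    # drop the point itself
        h, _, _ = np.histogram2d(dist[i, mask], theta[i, mask],
                                 bins=[r_edges, t_edges])
        hists[i] = h
    return hists
```

Normalizing by the mean pairwise distance makes the descriptor scale-invariant, a standard choice in the shape-context literature.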

Fig. 5. Architecture of our neural network for coordinate order optimization. The network takes as input a set of point coordinates and cluster assignments {xi}, and outputs an optimized coordinate sequence {yi}. Please refer to the text for details on the components of the architecture.

Fig. 6. Shape context distance prediction network, which estimates the scalar distance between two shape context descriptors s1 and s2, where “C” denotes concatenation and “FC” is a fully-connected network.
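For reference, the ground-truth quantity such a network can regress is a histogram distance between the two descriptors; the classic shape-context literature uses the χ² matching cost, sketched below as a stand-in (we do not claim this is the exact distance used in the paper):

```python
import numpy as np

def chi2_cost(s1, s2, eps=1e-10):
    """Chi-squared matching cost between two shape context
    histograms of the same shape (normalized before comparison)."""
    h1 = s1 / (s1.sum() + eps)
    h2 = s2 / (s2.sum() + eps)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
```

The prediction network amortizes this cost: computing it exactly for every glyph pair at every training step would be expensive, while one forward pass through a small FC network is cheap.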

Fig. 8. Statistical results of grouping quality in the user study on the class separation measure, where we compare the user performance on star glyph grouping when the glyphs are given in different coordinate orders based on low and high silhouette coefficients and salient features. We see that users can tell the perceptual difference between different glyphs more clearly (higher grouping quality) for sets ordered with a high SC.

Fig. 10. Comparison to baseline method: two examples of synthetic data on the left, and one example of real data on the right. In all examples, glyphs at the same position represent the same data point but are drawn with different coordinate orders. Glyph labels are shown by two different colors, and the numbers under each set are the silhouette coefficients (SCs) of the orderings. Glyphs from different classes look quite similar when using the input random coordinate order. While the baseline method increases this difference, our ordering provides the strongest contrast between classes.

Data & Code

Note that the DATA and CODE are free for Research and Education Use ONLY. 

Please cite our paper (bibtex below) if you use any part of our ALGORITHM, CODE, DATA or RESULTS in any publication.



We thank the anonymous reviewers for their valuable comments. This work was supported in part by NSFC (61872250, 61861130365, 61761146002), GD Talent Program (2019JC05X328), GD Science and Technology Program (2020A0505100064, 2018KZDXM058), LHTD (20170003), NSERC Canada (611370, 293127), gift funds from Adobe, a Google Faculty Research Award, National Engineering Laboratory for Big Data System Computing Technology, and Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ).



title={Shape-driven Coordinate Ordering for Star Glyph Sets via Reinforcement Learning},
author={Hu, Ruizhen and Chen, Bin and van Kaick, Oliver and Deussen, Oliver and Huang, Hui and Xu, Juzhan},
journal={IEEE Transactions on Visualization and Computer Graphics},
year={2021},



Downloads (faster for users in China)

Downloads (faster for users outside China)