An Interactive 3D Visualization for Content-Based Image Retrieval




Munehiro Nakazato and Thomas S. Huang
Beckman Institute for Advanced Science and Technology
University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
E-mail: {nakazato, huang}@ifp.uiuc.edu

Abstract

3D visualization for Content-Based Image Retrieval (CBIR) eases the integration of searching and browsing in a large image database. Although many researchers have proposed visualization systems for CBIR, most approaches display images based on a fixed set of image features. Not all features, however, are equally important to every user. Moreover, in most systems all images in the database are displayed regardless of the user's interest; displaying too many images consumes resources and may confuse the user. In this paper, we propose an interactive 3D visualization system for content-based image retrieval named 3D MARS. In 3D MARS, only relevant images are displayed, on either a projection-based immersive Virtual Reality system or desktop VR. Based on the user's feedback, the system dynamically reorganizes its visualization scheme. 3D MARS eases the tedious task of searching for images in a large collection. In addition, the Sphere display mode effectively visualizes clusters in the image database, making the system a powerful analysis tool for CBIR researchers.

1. Introduction

Digital imaging has become ubiquitous. People can take digital pictures with inexpensive digital cameras and no longer have to worry about the price of film. Moreover, beautiful pictures can be downloaded from various web sites for free. In return, however, we suffer the tedious tasks of searching and organizing a huge number of images. In the last century, various techniques for Content-Based Image Retrieval (CBIR) were proposed [5][6][7][8]. While CBIR systems provide a smart way of searching for images, they have significant limitations.

First, in traditional CBIR systems, the query results are ordered and displayed in a line (i.e., in 1D) based on a weighted sum of distance measures, whereas the image features are high-dimensional vectors of different image properties such as color, texture, and structure. As a result, much information is lost in the visualization. This causes problems especially when the number of query examples is small: the system cannot tell which feature is the most important to the user, so the most relevant images may not appear in the early stages of the query. One solution is to let the user adjust the query parameters, as is often done in other image retrieval systems [6]. In this approach, the user has to specify the weight of each feature, a process that is tedious and difficult for novice users. Second, because the result images are tiled on a monitor, only a limited number of images can be displayed at the same time, and it is painful for the user to page back and forth in the browser with the Next and Previous buttons.

In this paper, we propose a new visualization system for content-based image retrieval named 3D MARS. In 3D MARS, images are displayed in a projection-based immersive Virtual Reality or a non-immersive desktop VR. The three-dimensional space can display more images at once than traditional CBIR systems, and by giving a different meaning to each axis, the user can simultaneously browse the retrieved images with respect to three different criteria.
In addition, from the user's feedback, the system incrementally improves the image query and dynamically adapts the visualization scheme using relevance feedback techniques [5][7][8]. Moreover, with the Sphere display mode, the system provides a powerful analysis tool for CBIR researchers. The rest of the paper is organized as follows. In the next section, we describe the differences between text database visualization and image database visualization. In Section 3, a brief overview of previous approaches is presented. The proposed system is then described in the following sections. Finally, the conclusion and future work are presented in Section 10.

2. Text Visualization vs. Image Visualization

Many researchers have proposed 3D information visualization for text document databases [1][2][3]. Why do we need another visualization scheme for image databases? Because there are significant differences between text documents and images with regard to visualization.

First, in most text document visualization systems, only the title and minimal information can be displayed at once; otherwise the display would be cluttered with text. At the same time, it is difficult for a user to judge relevance from the title of a document alone. To see more information, such as the abstract or the contents of a document, the user has to select it and open another display window (focus+context). In image retrieval, on the other hand, the user needs only the image itself to judge relevance. This judgment is instant and requires no additional display window, so the system need show only the images themselves (and titles if necessary). Image visualization is therefore better suited to fully immersive Virtual Reality systems such as the CAVE.

Second, in both text and image retrieval systems, documents are indexed in a high-dimensional space, so the dimensionality has to be reduced to display the documents in 3D. Because a text retrieval index is built from the occurrence and frequency of keywords, it is difficult to group its components automatically in a meaningful manner; such an organization is usually domain specific and requires human effort. The feature vector of an image retrieval system, by contrast, can easily be grouped, for example into color, texture, and structure, so the feature space can be organized hierarchically for 3D visualization.

In content-based image databases, however, there is a significant semantic gap [14] between the indexed image features and the user's concept. Most image databases index images into numerical features such as color moments and wavelet coefficients, as described in Section 7. These features are not directly related to the user's concept: even if two images are close to each other in the high-dimensional feature space, they do not necessarily look similar to the user. Therefore, to express the user's semantic concept with these low-level features, the weights of the feature components should be adjusted automatically. The relevance feedback technique for image retrieval was introduced by Rui et al. [5] for this purpose. In text databases, by contrast, related documents are more likely to share keywords and thus lie close to each other in the feature space.

3. Related Work

Many researchers have proposed 2D or 3D visualization systems for content-based image retrieval [10][11][15][16][17]. Virgilio [11] is a non-immersive VR environment for image retrieval implemented in VRML. The locations of the images are computed off-line, so interactive querying is not possible: only system administrators can send a query to the system, and other users can merely browse the resulting visualization. Hiroike et al. [10] also developed a VR system for image retrieval in which hundreds of images from the database are displayed in 3D space. Based on the user's feedback, these images are reorganized and form clusters around the example images.
In their system, all the images in the database are displayed at all times. Chen et al. [17] applied the Pathfinder Network Scaling technique [18] to an image database. A Pathfinder network creates links among the images so that each path represents a shortest path between images. In their system, mutually linked images are displayed in a 3D VR space; depending on the pre-selected features, the network takes very different shapes. The number of images is fixed at 279. Several researchers have applied Multidimensional Scaling (MDS) [13] to image visualization. Rubner et al. [15] used MDS for 2D visualization of images. Tian and Taylor [16] applied MDS to visualize 80 color texture images in 3D and compared the visualization results for different sets of image features. However, because MDS is computationally expensive (O(N^2) time), it is not suitable for visualizing a large number of images.

4. 3D MARS: Interactive Visualization for CBIR

In most of the approaches described above, a set of image features has to be selected in advance. The problem, however, is that not all features are equally important to the user. For example, suppose a user is looking for images of balls of any color. In this case, color features are not useful and should not drive the visualization; an inappropriate visualization can be misleading. Furthermore, the important set of features is context dependent, so the user would have to change the feature set according to his current interest, which is a very difficult task for novice users.

Furthermore, in most systems, all images in the database are displayed regardless of the user's interest. Displaying too many images exhausts resources and may annoy the user. To address these problems, we propose a new visualization system for image databases named 3D MARS. In 3D MARS, the system interactively adapts the visualization scheme to the user's requests. The user tells the system his interest by specifying example images (Query-by-Example). By repeating this feedback loop, the system incrementally optimizes the display space.

5. User Navigation

In 3D MARS, images are displayed in a projection-based immersive VR or a non-immersive desktop VR. In the immersive case, we use the NCSA CAVE: the image space is projected on four walls (front, left, right, and floor) surrounding the user, who sees a stereoscopic view of the world through shutter glasses and interacts with the space using a wand. The user can walk freely around the CAVE. In the non-immersive case, the VR space is displayed on a CRT monitor and the user interacts with the system with a keyboard and a mouse; shutter glasses can be worn for a better VR experience.

When the system starts, it displays a number of randomly chosen images aligned in front of the user like a gallery. As the user moves, the images rotate to face the user. When the user touches an image with the wand, the image is highlighted and its filename is displayed below it. By moving the wand (or mouse), the image can be moved to any position. The user selects an image as relevant (i.e., as a query example) by pressing a wand or mouse button; more than one image can be selected, and the selected images are displayed with red frames. Pressing the button again de-selects an image. The user can also mark an image as a negative example; negative examples are displayed with blue frames. Moreover, the user can fly through the space with the joystick. To keep the user from getting lost, a virtual compass is provided on the floor, whose three arrows always show the X-, Y-, and Z-axes (Figure 3).

When the user presses the QUERY button, the system retrieves and displays the most similar images from the image database (Figure 2). The locations of the images are determined by their feature distances from the query: the X-, Y-, and Z-axes represent the color, texture, and structure of the images, respectively, and the more similar an image is, the closer it is placed to the origin of the space. If the user finds another relevant (or irrelevant) image in the result set, he can select it as an additional positive (or negative) example and press the QUERY button again. By repeatedly picking images, the query improves incrementally and more images of the user's interest cluster near the origin (Figure 3).

Figure 2 shows the result after a user selected one red flower image as a positive example. Because only one example is specified, the system assumes every feature is equally important; as a result, many different types of images are displayed. From this result, the user can give further feedback by selecting more red flower images. Figure 3 shows the visualization after the user selected two red flower images as query examples: more flower pictures are gathered around the origin. Here, the green arrow marks the Color (X) axis, the blue arrow the Texture (Y) axis, and the red arrow the Edge structure (Z) axis.
The pictures of red flowers have color features very similar to the query examples but different texture and structure, so they are displayed on the Y-Z plane. An image of a white flower, on the other hand, has different color features but a shape similar to the examples, so it is displayed on the X-Y plane.

For researchers working on image retrieval systems, visualizing how the query vector is formed and how images cluster in the feature space is useful for evaluating their algorithms. For this purpose, we implemented a Sphere Mode (Figure 4). In this mode, all images are represented by spheres, which makes it easier to examine clusters in the VR space: positive examples are displayed as red spheres and negative examples as blue spheres. By flying through the space in this mode, a researcher can examine how images are clustered from different viewing angles. For example, by looking down at the floor from a higher position, the user can see how images cluster with respect to color and structure (see Figure 5).

6. System Overview

3D MARS is implemented as a client-server system consisting of a Query Server and a Visualization Engine (client), as shown in Figure 1. They communicate via the Hyper-Text Transfer Protocol (HTTP), and more than one client can connect to the server simultaneously. Image features are extracted in advance and stored in the Meta-data database.

Figure 1. The system architecture: an Image File Database and Feature Extractor feed the Meta-data Database behind the Query Server (a Sun Enterprise server); the server communicates over HTTP with Visualization Engine clients, an SGI Onyx driving the immersive CAVE and an SGI O2 for desktop VR.
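To make the client-server exchange concrete, the sketch below shows what the client side of a query could look like. This is an illustration only: the paper specifies that requests are sent as HTTP GET commands and that the server returns the IDs and 3D locations of the k most similar images, but the endpoint name, parameter names, and reply line format shown here are hypothetical.

```python
# Hypothetical client-side sketch of the query exchange described above.
# The endpoint, parameter names, and reply format are illustrative guesses;
# the paper states only that HTTP GET is used and that the reply lists the
# IDs and 3D locations of the k most similar images.
from urllib.parse import urlencode
from urllib.request import urlopen

def query_server(host, positive_ids, negative_ids, k=50):
    params = urlencode({
        "pos": ",".join(map(str, positive_ids)),  # relevant examples
        "neg": ",".join(map(str, negative_ids)),  # irrelevant examples
        "k": k,                                   # number of images requested
    })
    with urlopen(f"http://{host}/query?{params}") as reply:
        results = []
        for line in reply.read().decode().splitlines():
            image_id, x, y, z = line.split()      # assumed: "id x y z" per line
            results.append((image_id, (float(x), float(y), float(z))))
    return results
```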

7. Image Features

The image features consist of thirty-four (34) numerical values from three groups: color, texture, and edge structure. These features are indexed and stored in the Meta-data database in advance.

Color. For the color features, the HSV color space is used. We extract the first two moments (mean and standard deviation) from each of the H, S, and V channels [4]. The total number of color features is therefore 2 × 3 = 6.

Texture. For texture, each image is passed through a wavelet filter bank that decomposes it into 10 decorrelated subbands. For each subband, the standard deviation of the wavelet coefficients is extracted, giving 10 texture features.

Edge Structure. We use the Water-Fill edge detector [9] to extract image structure. We first pass the original image through the edge detector to generate its corresponding edge map, from which eighteen (18) elements are extracted.
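As a concrete illustration of the color features, the following minimal sketch computes the six HSV moments. It assumes the image arrives as an RGB array with values in [0, 1] and uses matplotlib's rgb_to_hsv for the color-space conversion; it is a sketch of the idea in [4], not the authors' implementation.

```python
# Minimal sketch of the six color features: the mean and standard deviation
# of each HSV channel [4].  Assumes an RGB image array with values in [0, 1].
import numpy as np
from matplotlib.colors import rgb_to_hsv

def color_moments(rgb_image):
    """Return [mean_H, std_H, mean_S, std_S, mean_V, std_V]."""
    hsv = rgb_to_hsv(rgb_image)            # shape (height, width, 3)
    features = []
    for channel in range(3):               # H, S, V in turn
        values = hsv[..., channel]
        features.append(values.mean())     # first moment
        features.append(values.std())      # second moment
    return np.array(features)              # 2 moments x 3 channels = 6 values
```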
8. Query Server

The Query Server is implemented as an extension of MARS (Multimedia Analysis and Retrieval System) [5][7][8] for information visualization. The server maintains the image files and their meta-data. When the server receives a request from a client, it compares the user-selected images with the images in the database, then sends back the IDs of the k most similar images together with their locations in 3D.

8.1. Total Ranking vs. Feature Ranking

In the normal MARS system [5], the ranking of similar images is based on a combination of all three feature groups, with the weight of each feature computed from the query examples. In the early stages of the interaction loop, however, the user may specify only one example. In this case, the query server cannot tell which feature is important, so it assumes every feature is equally important. As a result, an image is considered relevant only when every one of its features is close to the query, which can trap the search in a local minimum. To remedy this problem, we use two ranking strategies: Feature Ranking and Total Ranking.

The Feature Ranking is a ranking with respect to only one group of features. First, for each feature group i = 1, ..., I, the system computes a query vector q_i from the positive examples specified by the user. Next, it computes the feature distance d_ni of each image n in the database as

    d_{ni} = \sum_k w_{ik} (x_{nik} - q_{ik})^2,    (1)

where x_{nik} is the k-th component of the i-th feature group of image n and q_{ik} is the k-th component of q_i. The weight w_{ik} is the inverse of the standard deviation of x_{nik} over the database (n = 1, ..., N):

    w_{ik} = 1 / \sigma_{ik}.    (2)

The feature ranking is then obtained by comparing d_{ni} over n = 1, ..., N. The value d_{ni} is also used as the location along the corresponding axis in the fixed-axes mode described later.

After the Feature Ranking is computed, the system combines the feature distances d_{ni} into a total distance D_n, a weighted sum of the d_{ni}:

    D_n = u^T d_n,  where d_n = [d_{n1}, ..., d_{nI}]^T    (3)

and I is the total number of feature groups (in our case, I = 3). The optimal solution for u = [u_1, ..., u_I]^T was derived by Rui et al. [7] as

    u_i = \sum_{j=1}^{I} \sqrt{f_j / f_i},  where f_i = \sum_{n=1}^{N} d_{ni}    (4)

and N is the number of positive examples. This gives higher weight to the feature group whose total distance is smaller; that is, if the query examples are similar with respect to a feature, that feature receives higher weight. The complete discussion of the original MARS system can be found in [5] and [7]. Finally, the Total Ranking is computed from the total distances. The server sends back to the client the IDs of the top K images in each feature ranking and the top K images in the total ranking.
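The following sketch implements the two ranking strategies as reconstructed in Eqs. (1)-(4). It treats each feature group as a NumPy matrix of database features, and it assumes the query vector of each group is the mean of the positive examples, which the paper does not state explicitly; it is an illustration of the scheme, not the MARS code.

```python
# Sketch of Feature Ranking and Total Ranking under Eqs. (1)-(4).
# `groups` maps a feature-group name to an (N, K_i) matrix of database
# features; `positive` holds the indices of the positive examples.
# Assumption: the query vector q_i is the mean of the positive examples.
import numpy as np

def rank_images(groups, positive, top_k):
    feature_dist = {}
    for name, x in groups.items():
        w = 1.0 / x.std(axis=0)                    # Eq. (2): w_ik = 1/sigma_ik
        q = x[positive].mean(axis=0)               # query vector q_i
        feature_dist[name] = ((x - q) ** 2 * w).sum(axis=1)   # Eq. (1): d_ni
    d = np.column_stack(list(feature_dist.values()))          # (N, I) matrix
    f = d[positive].sum(axis=0)                    # f_i over positive examples
    u = np.sqrt(f / f[:, None]).sum(axis=1)        # Eq. (4): sum_j sqrt(f_j/f_i)
    total = d @ u                                  # Eq. (3): D_n = u^T d_n
    feature_ranking = {name: np.argsort(dist)[:top_k]         # top K per group
                       for name, dist in feature_dist.items()}
    total_ranking = np.argsort(total)[:top_k]      # top K overall
    return feature_ranking, total_ranking
```

An image that is near the query in only one feature group still surfaces in that group's feature ranking, which is exactly the effect discussed next.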

By using both the Feature Ranking and the Total Ranking, the system can return images even when only one of their feature groups is close to the query. Such images are usually located at some distance from the origin in the 3D space, and they would be ignored by traditional CBIR systems. The feature ranking is especially important in the early stages of the query process, when the user has not yet provided enough query examples.

8.2. Implementation

The server is implemented as a Java Servlet behind the Apache Web Server and is written in C++ and Java. It can communicate simultaneously with different types of clients, such as the Java applet client [20], and runs on a Sun Enterprise server. Currently, 17,000 images and their feature vectors are stored.

9. Visualization Engine

The Visualization Engine takes a request from the user, sends it to the server, and receives the result. It then visualizes the resulting images in the VR space; in the immersive case, it displays the images on the four walls of the CAVE, a projection-based Virtual Reality system. When the user pushes the QUERY button, the engine sends the IDs of the selected (positive or negative) images to the server as a GET command of HTTP. When the reply returns, the client receives a list of the IDs of the k most similar images and their locations, downloads the corresponding image files (e.g., JPEG files) from the image database, and displays them in the virtual space. The system can display an arbitrary number of images, depending on the available resources (texture memory); in our environment, 50 to 200 images are displayed. In Sphere mode, more data can be displayed simultaneously because the image textures do not have to be kept in memory.

This component is written in C++ with OpenGL and the CAVE library. The immersive version of the visualization engine runs on a twelve-processor Silicon Graphics Onyx2: each wall of the CAVE is drawn by a dedicated processor, and the loaded image data are kept in shared memory and accessed by all of these processors. The desktop VR version runs on an SGI O2.

9.1. Projection Strategies

To project the high-dimensional feature space into 3D, we take two different approaches: Static Axes and Dynamic Axes.

Static Axes. In the static axes approach, the meanings of the X-, Y-, and Z-axes are fixed to some extent. In our implementation, they always represent the distance with respect to color, texture, and structure, respectively. The location of each image along each axis is determined by the weighted feature distance computed in the Query Server, as described in Eq. (1); for each axis, the system automatically chooses the appropriate combination of features from the corresponding feature group. Because the meanings of the axes do not change between interactions, the user can use them to maintain the context of the search, which makes navigation in the VR space easier. The drawback of the static axes approach is that some axis (a group of features) may carry no useful information for the user: if none of the texture features are meaningful, for example, the Y-axis has no meaning.

Dynamic Axes. In the dynamic axes approach, the meanings of the axes change with every interaction. The locations of images are determined by projecting the full 34-dimensional feature vector into the three-dimensional space. Many techniques have been proposed for this purpose.
Because our goal is to provide a fully interactive visualization, a computationally expensive method such as MDS is not suitable. Instead, we use the faster FastMap method developed by Faloutsos and Lin [12]. FastMap takes a distance matrix of points and recursively maps the points into lower-dimensional hyperplanes; it requires only O(Nk) computation, where N is the number of images and k is the desired dimension. First, we feed the raw feature vectors of the retrieved images (including the query vector) and the feature weights (Eq. 2) into FastMap. Here there is no distinction among the color, texture, and structure feature groups: they are combined into one 34-dimensional vector. After FastMap projects the image features into 3D, we translate the entire VR space so that the location of the query vector coincides with the origin. This guarantees that the distance between an image and the origin always represents its degree of similarity. The advantage of this approach is that the system uses only the information necessary to discriminate among the retrieved images; the disadvantage is that, because the meanings of the directions keep changing, the user may be confused.
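For concreteness, the sketch below is a straightforward reading of FastMap [12] applied to the weighted feature vectors. The pivot-selection heuristic and the use of Euclidean distances on the weighted vectors are our assumptions, not the authors' exact implementation.

```python
# Illustrative FastMap [12]: maps N points to k dimensions using O(Nk)
# distance evaluations.  Distances are Euclidean on the (already weighted)
# feature vectors -- an assumption for this sketch.
import numpy as np

def fastmap(points, k=3, seed=0):
    rng = np.random.default_rng(seed)
    n = len(points)
    coords = np.zeros((n, k))

    def dist2(a, b, dim):
        # squared distance minus the part explained by earlier dimensions
        d2 = ((points[a] - points[b]) ** 2).sum()
        return d2 - ((coords[a, :dim] - coords[b, :dim]) ** 2).sum()

    for dim in range(k):
        # heuristic: find two far-apart pivot objects by alternating hops
        a = int(rng.integers(n))
        for _ in range(5):
            b = max(range(n), key=lambda o: dist2(a, o, dim))
            a, b = b, a
        dab2 = dist2(a, b, dim)
        if dab2 <= 0:
            break                        # nothing left to explain
        for i in range(n):
            # cosine-law projection of point i onto the pivot line (a, b)
            coords[i, dim] = (dist2(a, i, dim) + dab2 - dist2(b, i, dim)) \
                             / (2 * dab2 ** 0.5)
    return coords
```

In 3D MARS, the query vector is mapped together with the retrieved images, and the whole space is then translated so that the query lands at the origin.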

10. Conclusion and Future Work

In this paper, we proposed a new interactive visualization system for Content-Based Image Retrieval named 3D MARS. Compared with traditional CBIR systems, more images can be displayed in the 3D space, and by giving a different meaning to each axis, the user can simultaneously browse the retrieved images with respect to three different criteria. In addition, through the feature ranking, the system can display images that would be ignored by traditional CBIR systems. Furthermore, unlike other 3D image visualization systems, where the mapping to 3D space is fixed, 3D MARS interactively optimizes the visualization scheme in response to the user's feedback. With the Sphere display mode, 3D MARS not only provides efficient image searching but also gives CBIR researchers a powerful analysis tool: by flying through the space, the user can analyze image clusters from different viewpoints.

One limitation of our system is that the user has to find an initial query example for Query-by-Example from a random selection, repeating the random query until an interesting image appears. Chen et al. [19] proposed a technique to automatically generate a tree structure over an image database; by following this hierarchy, the user can browse the images effectively. Pecenovic et al. [21] integrated image browsing and query-by-example: images are organized into a hierarchical structure by recursively clustering them with the k-means algorithm, and at every level each node is represented by the image closest to the centroid of its cluster. From browsing, the user can switch to query-by-example. If some images in the database have text annotations, this information can also be used as the starting point of a query. We plan to integrate several such browsing strategies into 3D MARS.

11. Acknowledgement

This work was supported in part by National Science Foundation Grant CDA 96-24396.

12. References

[1] Card, S. K., Mackinlay, J. D. and Shneiderman, B., Readings in Information Visualization: Using Vision to Think, Morgan Kaufmann, 1999.
[2] Wise, J. A. et al., Visualizing the non-visual: Spatial analysis and interaction with information from text documents, in Proceedings of the Information Visualization Symposium '95, pp. 51-58, IEEE Computer Society Press, 1995.
[3] Hearst, M. A. and Karadi, C., Cat-a-Cone: An interactive interface for specifying searches and viewing retrieval results using a large category hierarchy, in Proceedings of the 20th Annual International ACM SIGIR Conference, Philadelphia, PA, 1997.
[4] Stricker, M. and Orengo, M., Similarity of color images, in Proceedings of SPIE, Vol. 2420 (Storage and Retrieval for Image and Video Databases III), SPIE Press, Feb. 1995.
[5] Rui, Y., Huang, T. S., Ortega, M. and Mehrotra, S., Relevance feedback: A power tool for interactive content-based image retrieval, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 8, No. 5, Sept. 1998.
[6] Flickner, M. et al., Query by image and video content: The QBIC system, IEEE Computer, 1995.
[7] Rui, Y. and Huang, T. S., Optimizing learning in image retrieval, in Proceedings of IEEE CVPR, 2000.
[8] Zhou, X. and Huang, T. S., A generalized relevance feedback scheme for image retrieval, in Proceedings of SPIE Vol. 4210: Internet Multimedia Management Systems, 6-7 November 2000, Boston, MA, USA.
[9] Zhou, X. S. and Huang, T. S., Edge-based structural features for content-based image retrieval, Pattern Recognition Letters, Special Issue on Image and Video Indexing, 2000.
[10] Hiroike, A. and Musha, Y., Visualization for similarity-based image retrieval systems, IEEE Symposium on Visual Languages, 1999.
[11] Massari et al., Virgilio: A non-immersive VR system to browse multimedia databases, in Proceedings of IEEE ICMCS '97, 1997.
[12] Faloutsos, C. and Lin, K., FastMap: A fast algorithm for indexing, data-mining and visualization of traditional and multimedia datasets, in Proceedings of ACM SIGMOD '95, pp. 163-174, May 1995.
[13] Kruskal, J. B. and Wish, M., Multidimensional Scaling, SAGE Publications, Beverly Hills, 1978.
[14] Santini, S. and Jain, R., Integrated browsing and querying for image databases, IEEE Multimedia, Vol. 7, No. 3, 2000, pp. 26-39.
[15] Rubner, Y., Guibas, L. and Tomasi, C., The earth mover's distance, multi-dimensional scaling, and color-based image retrieval, in Proceedings of the ARPA Image Understanding Workshop, May 1997.
[16] Tian, G. Y. and Taylor, D., Colour image retrieval using virtual reality, in Proceedings of the IEEE International Conference on Information Visualization (IV '00), 2000.
[17] Chen, C., Gagaudakis, G. and Rosin, P., Content-based image visualization, in Proceedings of the IEEE International Conference on Information Visualization (IV '00), 2000.
[18] Schvaneveldt, R. W., Durso, F. T. and Dearholt, D. W., Network structures in proximity data, in The Psychology of Learning and Motivation, Vol. 24, G. Bower, Ed., Academic Press, 1989, pp. 249-284.
[19] Chen, J.-Y., Bouman, C. A. and Dalton, J. C., Hierarchical browsing and search of large image databases, IEEE Transactions on Image Processing, Vol. 9, No. 3, pp. 442-455, March 2000.
[20] Nakazato, M. et al., UIUC Image Retrieval System for JAVA, available at http://chopin.ifp.uiuc.edu:8080.
[21] Pecenovic, Z., Do, M.-N., Vetterli, M. and Pu, P., Integrated browsing and searching of large image collections, in Proceedings of the Fourth International Conference on Visual Information Systems, November 2000.

Figure 2. The result after the user selected one red flower picture (in fixed axes mode). The query example is displayed near the origin; the arrows form the virtual compass, which indicates the direction of each axis. Some similar flower pictures, as well as dissimilar images, are displayed at a distance. The number of images is 50.

Figure 3. The result after the user selected additional flower images. Many flower images are gathered around the origin; red flowers of different texture are aligned along the red arrow, and a white flower is displayed in a different position.

Figure 4. The Sphere Mode. The number of images is 100.

Figure 5. The Sphere Mode from a different viewing angle (from the zenith of the space), visualizing the relationship between color and structure.