Frappé: Querying the Linux Kernel Dependency Graph
Nathan Hawes, Oracle Labs; Ben Barham, Oracle Labs; Cristina Cifuentes, Oracle Labs

ABSTRACT
Frappé is a developer tool for querying and visualizing the dependencies of large C/C++ software systems on the order of tens of millions of lines of code. It supports developers with a range of code comprehension queries, such as "Does function X or something it calls write to global variable Y?" and "How much code could be affected if I change this macro?". Results are overlaid on a visualization of the dependency graph based on a cartographic map metaphor. In this paper, we give a brief overview of Frappé and describe our experiences implementing it on top of the Neo4j graph database. We detail the graph model used by Frappé and outline its key use cases using representative queries and their runtimes over the dependency graph of the Unbreakable Enterprise Kernel. Finally, we discuss some of the open challenges in supporting source code queries across single and multiple versions of an evolving codebase with current property graph database technologies: performance, efficient storage, and the expressivity of the graph query language given a graph model.

Categories and Subject Descriptors
D.2.3 [Software Engineering]: Coding Tools and Techniques - program editors. D.2.5 [Software Engineering]: Testing and Debugging - debugging aids, testing tools. D.2.6 [Software Engineering]: Programming Environments - integrated environments.

General Terms
Algorithms, Experimentation, Languages.

Keywords
Source code querying tools, Graph databases, C/C++.

1. INTRODUCTION
Whether identifying architectural issues or simply locating the underlying definition of a symbol, source code querying tools are becoming increasingly important as code bases grow into the tens of millions of lines of code [1].
This is particularly true for C/C++ source code of this magnitude, typically systems code, where the preprocessor, complex language features, custom build systems, and large quantities of legacy code further complicate such tasks and reduce the effectiveness of tooling designed to support them.

GRADES'15, May 31 - June, 2015, Melbourne, VIC, Australia. Copyright 2015 ACM.

Figure 1. Components of a source code querying system

Source code querying systems typically comprise four elements, as seen in Figure 1: an extractor to pull the desired system model from the source code, a repository to store that data, an interface that allows the user to specify queries and view their results, and a query processor that answers queries by examining the repository [2]. The simplest code querying systems are text-search based: the source code files themselves serve as the data repository, regular expressions and lists of source locations as the interface, and tools such as grep as the query processor. More sophisticated and precise solutions typically hook into, or partially re-implement, the compilation and linking process to extract structured data into a separate repository. End users typically query via predefined commands in an Integrated Development Environment (IDE), e.g. go to definition, find references, remove unused imports.
Results are most often displayed as a list of textual matches with hyperlinks to the relevant source code. While the solutions employed in many IDEs have greater precision due to their more complete understanding of the underlying language (i.e. types, scoping, and linking information), text editors like vim and emacs and simpler code querying tools like grep are still widely used in the C/C++ community. This is largely due to the impracticality of integrating IDEs into large, complex build systems in some cases, as well as spurious errors owing to the custom parsers used by most IDEs (Visual Studio and Xcode being exceptions). The purely textual presentation of query results used both in simple text-based tools and in IDEs also places a large cognitive load on the user: many code queries produce large result sets in large-scale systems, and each result is often structured (e.g. when searching for dependency cycles or for paths from an entry point to a function of interest).

Frappé is a C/C++ source code querying tool that aims to support large-scale code bases. Our main focus has been on the extractor, to obtain precise dependency information (including cross-linking of information), and on the interface, to present large, structured result sets effectively [3]. This paper focuses on our experiences using a graph database as the repository and query processor: Neo4j [4] and its Cypher query language [5]. The paper explains use cases from the source code querying community, and discusses which use cases are served well by relational or graph databases and their query languages. It also poses some challenges to the graph database community in order to support a
richer, more interesting set of use cases, namely, being able to efficiently store data for multiple versions of a large C/C++ codebase as it evolves, and to specify succinct, performant queries within and across those versions.

2. FRAPPÉ OVERVIEW
With Frappé, we aim to address the above issues by providing an easy-to-integrate, precise C/C++ source code querying system that scales both in terms of performance and presentation. Of the four components of a source code querying system, only the extractor and the interface need to be tailored specifically to source code querying; hence, our major focus to date has been on these two components.

For the extractor, integration with custom builds is made easy using the same approach as the Oracle Parfait bug-checking tool [6]: we provide wrapper scripts that serve as drop-in replacements for the most common compilers (e.g. gcc, icc, cc, Clang) and compilation tools. These scripts still execute the native compiler they wrap, but also run a modified version of the complete Clang compiler (rather than a custom parser) to capture precise information on the various source entities and dependencies in each compilation unit.

For the interface, Frappé uses the captured dependency information to generate a zoomable 2D spatial visualization of the code that employs a cartographic map metaphor, such that the continent/country/state/city hierarchy of the map corresponds to the equivalent hierarchy in source code: high-level architectural components down to individual files and functions [3,7]. Overlaying query results on this map, be they individual source entities, paths through the code, or transitive closures, gives an immediate general impression of the location, locality, structure, and quantity of results, which helps in perceptually filtering out irrelevant results.
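The wrapper-script approach can be sketched as intercepting each compiler invocation and deriving two command lines from it: one for the native compiler, one for a Clang-based extraction pass. The following is a minimal Python sketch of the idea only; the tool names (native_cc, frappe-extract) and flag handling are illustrative assumptions, not Frappé's actual scripts.

```python
import shlex

def wrap_invocation(argv, native_cc="gcc", extractor="frappe-extract"):
    """Return (native_cmd, extractor_cmd) for one compiler invocation."""
    args = argv[1:]                      # drop the wrapper's own name
    native_cmd = [native_cc] + args      # the build proceeds exactly as before
    # The extractor reuses flags that affect parsing (-D, -I, -std=...) and
    # the source files, but not output options like "-o foo.o".
    keep, skip_next = [], False
    for a in args:
        if skip_next:
            skip_next = False
            continue
        if a == "-o":
            skip_next = True             # drop "-o" and its argument
        elif a != "-c":                  # "-c": no object files from extraction
            keep.append(a)
    extractor_cmd = [extractor] + keep
    return native_cmd, extractor_cmd

native, extract = wrap_invocation(shlex.split("cc foo.c -c -o foo.o -I include"))
print(native)   # → ['gcc', 'foo.c', '-c', '-o', 'foo.o', '-I', 'include']
print(extract)  # → ['frappe-extract', 'foo.c', '-I', 'include']
```

In a real deployment the wrapper would then spawn both command lines, so the native build succeeds or fails exactly as it would without Frappé installed.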
We looked to existing database management systems to fill the role of the repository, the query processor, and the query language for specifying custom queries in the interface. Relational DBMSs coupled with SQL would work well for some of the simpler use cases Frappé targets (covered in Section 4), but many common source code queries involve transitive closure or reachability computations. Specifying these in SQL can be difficult and results in verbose recursive queries that, when backed by a relational DBMS and a large data set, often suffer performance problems due to repeated join operations. For the implementation of Frappé, we instead chose to investigate a graph database, Neo4j, avoiding joins altogether. Its property graph model is a good conceptual fit for querying source code: many core program concepts are graph-based or graph-like (e.g. call graphs, type hierarchies, data-flow graphs, and control-flow graphs), and the most widely employed visual representation for software is the node-link diagram (e.g. to display module dependencies, call graphs, flow charts, or UML). As a result, graph-based query languages like Neo4j's Cypher seem natural in this domain. Urma et al. also recently showed, through their tool Wiggle [8], that Neo4j and Cypher can succinctly express source code queries for Java and scale to codebases of ~3 million lines of code.

3. GRAPH MODEL
The graph model used by Frappé synthesizes information from the preprocessor, abstract syntax tree, directory structure, and linker in order to serve a variety of use cases, outlined in Section 4.

    foo.h:   int bar(int);
    foo.c:   #include "foo.h"
             int bar(int input) { return input; }
    main.c:  #include "foo.h"
             int main(int argc, char **argv) { return bar(argc); }
    compile: gcc foo.c -c -o foo.o
             gcc main.c foo.o -o prog

Figure 2. Example dependency graph

Nodes in the graph represent a range of entities, from symbol definitions and declarations to macro definitions, source files, directories, and modules.
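To make the node/edge/property split concrete, the Figure 2 example can be written down as plain Python data. This is a toy in-memory model for illustration only, not Frappé's storage format; the node identifiers are illustrative.

```python
# Toy property graph for the Figure 2 example.
# Nodes carry a TYPE and SHORT_NAME; edges are (source, label, target).
nodes = {
    "prog":   {"TYPE": "module",    "SHORT_NAME": "prog"},
    "foo.o":  {"TYPE": "module",    "SHORT_NAME": "foo.o"},
    "main.c": {"TYPE": "file",      "SHORT_NAME": "main.c"},
    "foo.c":  {"TYPE": "file",      "SHORT_NAME": "foo.c"},
    "foo.h":  {"TYPE": "file",      "SHORT_NAME": "foo.h"},
    "main":   {"TYPE": "function",  "SHORT_NAME": "main"},
    "bar":    {"TYPE": "function",  "SHORT_NAME": "bar"},
    "argc":   {"TYPE": "parameter", "SHORT_NAME": "argc"},
    "argv":   {"TYPE": "parameter", "SHORT_NAME": "argv"},
    "input":  {"TYPE": "parameter", "SHORT_NAME": "input"},
    "int":    {"TYPE": "primitive", "SHORT_NAME": "int"},
    "char":   {"TYPE": "primitive", "SHORT_NAME": "char"},
}
edges = [
    ("prog",   "linked_from",   "foo.o"),
    ("prog",   "compiled_from", "main.c"),
    ("foo.o",  "compiled_from", "foo.c"),
    ("main.c", "includes",      "foo.h"),
    ("foo.c",  "includes",      "foo.h"),
    ("main.c", "file_contains", "main"),
    ("foo.c",  "file_contains", "bar"),
    ("main",   "calls",         "bar"),
    ("main",   "has_param",     "argc"),
    ("main",   "has_param",     "argv"),
    ("bar",    "has_param",     "input"),
    ("argc",   "isa_type",      "int"),
    # a QUALIFIERS property of "**" on this edge would record char **argv
    ("argv",   "isa_type",      "char"),
    ("input",  "isa_type",      "int"),
]
# Example lookup: everything main depends on directly.
print(sorted(t for s, label, t in edges if s == "main"))
# → ['argc', 'argv', 'bar']
```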
Edges represent the directed associations between these entities. The top of Figure 2 shows a small example program; the bottom shows the corresponding dependency graph produced by Frappé. The main function in file main.c makes use of function bar, defined in file foo.c and declared in header file foo.h. File foo.c is compiled into the object file foo.o. File main.c is compiled and linked with object file foo.o to produce the executable program prog. The nodes of this graph are the executable program prog, the object file foo.o, the source files main.c, foo.h and foo.c, the functions main and bar, the formal parameters argv, argc and input, and their types char and int. Edges, on the other hand, show relationships between these nodes, e.g. compiled_from, includes, or file_contains. Of interest, note that the edge isa_type from argv to char uses the QUALIFIERS property ** to denote the correct signature for argv. The complete set of node and edge types is shown in Table 1. As well as being typed or labeled, nodes and edges are further characterized by a number of properties, outlined in Table 2. These properties are useful both in displaying results to users and in
specifying and refining queries; examples are shown in Section 4. In general, the properties relate to naming, locations in source code (where applicable), and other statically determinable information such as qualifiers and positional information.

4. USE CASES
By using the graph model outlined in Section 3, Frappé is able to support a variety of use cases. In terms of query processing, these range from simple index lookups to reachability queries, transitive closures, and complex pattern matching.

4.1 Code Search
Whether finding a macro mentioned in a bug report or a vaguely remembered utility function, code search provides a quick entry point to exploring the source code related to a particular task. In large codebases, querying on name alone can produce an unmanageable number of results, so it is important to take advantage of the fact that users typically know the type of the entity they are looking for (e.g. a struct as opposed to a macro) and have some idea of where, at a high level, the entity is defined (e.g. in a particular directory or module). Searching by name, type and location with the graph model requires an index of symbol names with wildcard or fuzzy matching support, and the ability to filter not only on TYPE and node properties in general, but also on the surrounding graph structure. For example, to return only fields named 'id' present in the module wakeup.elf, any fields that are not reachable from the node representing that module via a sequence of file_contains, compiled_from and linked_from edges can be eliminated.

    START m=node:node_auto_index('short_name: wakeup.elf')
    MATCH m -[:compiled_from|linked_from*]-> f
    WITH distinct f
    MATCH f -[:file_contains]-> (n:field{short_name: 'id'})
    RETURN n

Figure 3. Symbol search constrained by module

Figure 3 shows how this is achieved in Neo4j's Cypher language.
The first MATCH statement matches all module nodes m to all files f in the transitive closure (*) of outgoing edges (-[ ]->) of type compiled_from or linked_from. The WITH statement carries forward only the set of distinct files in that closure. The second MATCH statement returns all fields with a SHORT_NAME property of 'id' that have an incoming file_contains edge from a file in that set.

4.2 Cross Referencing and Code Navigation
Once a developer has a given source file open in their editor, moving within and between files quickly becomes crucial to their productivity. This facility is provided through two core actions. The first, go-to-definition, automatically navigates the user from the symbol reference currently under their cursor in a given source file directly to its correct underlying definition. This is useful for quickly understanding and verifying the type of a variable or the behavior of a particular function. The second core action, find-references, finds all locations where the symbol under the cursor is referenced, and lets users inspect each in turn. This helps in understanding how a given function, type or macro is used when writing new code, and in ensuring all references to a symbol are consistent after it has been refactored.

Table 1. Node and edge types

Node types: directory, enum_def, enumerator, field, file, function, function_decl, function_type, global, global_decl, local, macro, module, parameter, primitive, static_local, struct, struct_decl, typedef, union, union_decl.

Edge types: calls, casts_to, compiled_from, contains, declares, dereferences, dereferences_member, dir_contains, expands_macro, file_contains, gets_align_of, gets_size_of, has_local, has_param, has_param_type, has_ret_type, includes, interrogates_macro, isa_type, link_declares, link_matches, linked_from, linked_from_lib, reads, reads_member, takes_address_of, takes_address_of_member, uses_enumerator, writes, writes_member.

Table 2. Node and edge properties

Node properties:
- TYPE: The node's type (see Table 1).
- SHORT_NAME: The file name; for symbols, the symbol name, e.g. main.
- NAME: The file name; for symbols, the symbol name including its parent, e.g. message::id.
- LONG_NAME: The file path; for symbols, the fully qualified symbol name, e.g. message::get_id(int).
- VALUE: For nodes with TYPE enumerator only. The enumerator's integer value.
- VARIADIC, VIRTUAL: For nodes with TYPE function only. Present if the function is variadic or virtual, respectively.
- IN_MACRO: Present if the node results from a macro expansion.

Edge properties:
- USE_FILE_ID, USE_START_LINE, USE_START_COL, USE_END_LINE, USE_END_COL: The source range of the expression corresponding to the edge (e.g. the complete call site for a calls edge), or of the macro expansion that produces it.
- NAME_FILE_ID, NAME_START_LINE, NAME_START_COL, NAME_END_LINE, NAME_END_COL: The source range of the representative token corresponding to the edge (e.g. the function name token for a calls edge).
- ARRAY_LENGTHS, BIT_WIDTH, QUALIFIERS: For edges labeled isa_type only. These further describe the nature of the type use: the constant dimension sizes of declared arrays, the bit widths of fields, and the coded string of type qualifiers in spoken order (] for array, * for pointer, c for const, v for volatile, and r for restrict).
- INDEX: For edges of type has_param and has_param_type only. The parameter position.
- LINK_ORDER: For edges of type linked_from only. The link order.
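The shape of the Figure 3 query, reachability from a module node followed by a property filter, can also be sketched outside Cypher as a plain graph traversal. The following toy data and helper are illustrative only (this is not the Neo4j API):

```python
from collections import deque

# Toy edge list: (source, label, target), mirroring the Section 3 model.
edges = [
    ("wakeup.elf", "linked_from",   "a.o"),
    ("a.o",        "compiled_from", "a.c"),
    ("wakeup.elf", "compiled_from", "b.c"),
    ("a.c",        "file_contains", "id_field_1"),
    ("b.c",        "file_contains", "id_field_2"),
    ("other.c",    "file_contains", "id_field_3"),  # not in the module
]
fields = {"id_field_1": "id", "id_field_2": "id", "id_field_3": "id"}

def fields_in_module(module, name):
    """Fields named `name` reachable via compiled_from/linked_from edges."""
    reachable, work = set(), deque([module])
    while work:                      # transitive closure from the module node
        cur = work.popleft()
        for s, label, t in edges:
            if s == cur and label in ("compiled_from", "linked_from"):
                if t not in reachable:
                    reachable.add(t)
                    work.append(t)
    # Filter: fields contained in a reachable file.
    return sorted(t for s, label, t in edges
                  if label == "file_contains" and s in reachable
                  and fields.get(t) == name)

print(fields_in_module("wakeup.elf", "id"))  # → ['id_field_1', 'id_field_2']
```

The Cypher version expresses the same reachability-then-filter computation declaratively, leaving the traversal strategy to the query planner.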
    START n=node:node_auto_index('short_name: id')
    WHERE (n) <-[{NAME_FILE_ID: , NAME_START_LINE: 104, NAME_START_COLUMN: 16}]- ()

Figure 4. Go to definition

The go-to-definition action can be thought of as a code search for the symbol name under the cursor, where the results are constrained not by the location of their definitions, but by the location of their references. In terms of the graph model, this amounts to eliminating results that do not have an incoming edge (<-[]-) whose NAME* file and source range properties match the position in the file of the start of the symbol under the cursor, as in Figure 4. The find-references action can then be thought of as simply listing the incoming edges of the result of the go-to-definition query.

4.3 Debugging
While being able to quickly jump to the definition or references of a symbol is useful in itself, these navigation tasks are often used to manually explore code paths as part of a broader goal that can be addressed more directly. Debugging is one such use case. If a global variable, for example, is known to be in an invalid state at a certain point in the program, the ultimate goal of the user is to find where that invalid value is coming from. Inspecting all the places where the global variable is written via find-references can be time-consuming and unproductive in large code bases, as a large number of results are irrelevant. Instead, only the writes that happen before the point in the program that is known to have an invalid state need to be considered. If, at an earlier point in the program execution, the state is known to be valid, that fact can also be used to further bound the writes that need to be considered.
    START from=node:node_auto_index('short_name: sr_media_change'),
          to=node:node_auto_index('short_name: get_sectorsize'),
          b=node:node_auto_index('short_name: packet_command')
    MATCH writer -[write:writes_member]-> ({SHORT_NAME:'cmd'}) <-[:contains]- b
    WITH to, from, writer, write
    MATCH direct <-[s:calls]- from -[r:calls{use_start_line: 236}]-> to
    WHERE r.use_start_line >= s.use_start_line AND direct -[:calls*]-> writer
    RETURN distinct writer, write.use_start_line

Figure 5. Paths where field cmd is written

While the relative control flow ordering of edges (i.e. the fact that one call, read, write, etc. happens before or after another) is not currently captured in the graph model, Figure 5 shows an example query that uses a comparison of the USE_START_LINE property as an approximation. In this example, the value stored in the field 'cmd' is known to be correct at the beginning of the function 'sr_media_change' and invalid on entering the function 'get_sectorsize'.
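The line-number approximation at the heart of this query can be illustrated with a toy filter: keep only the call sites that lie between the known-good and known-bad observation points, since writes reached from any other call cannot be the culprit. All names and line numbers below are hypothetical.

```python
# Hypothetical call sites inside one function body, each carrying a
# USE_START_LINE-style property recording where the call occurs.
calls = [
    {"callee": "init_buf", "use_start_line": 210},
    {"callee": "fill_cmd", "use_start_line": 240},
    {"callee": "send_cmd", "use_start_line": 251},
]
known_good_line = 220   # state verified valid here
known_bad_line = 250    # state observed invalid here

# Only calls made between the two observation points need inspecting.
suspects = [c["callee"] for c in calls
            if known_good_line <= c["use_start_line"] <= known_bad_line]
print(suspects)  # → ['fill_cmd']
```

As the text notes, this is only an approximation: source-line ordering does not always match control-flow ordering (loops, gotos, macros), which is why capturing relative control-flow order in the model would be preferable.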
Similarly, a forward slice is the transitive closure of incoming calls edges, and represents all code that may be affected if the seed function is changed. An example backward slice query is shown in Figure 6. The same idea can be applied to other edge types too, such as file includes, or to macro expansions to see all code potentially affected by the seed macro.

    START n=node:node_auto_index('short_name: pci_read_bases')
    MATCH n -[:calls*]-> m
    RETURN distinct m

Figure 6. Transitive closure of outgoing calls

Beyond transitive closures, shortest path queries are also useful in understanding how the parts of a codebase fit together. When commencing a new task in an unfamiliar region of the codebase, it is useful to understand how execution might reach it from a common entry point, or from a more familiar part of the code.

5. QUERYING THE LINUX KERNEL
To illustrate the performance and memory requirements of the above use cases on an open codebase, we ran Frappé over Oracle's Unbreakable Enterprise Kernel version , with 11.4 million lines of code.

5.1 Graph characteristics
We extracted all nodes and edges described in Table 1, resulting in just over half a million nodes and close to four million edges, for a ratio of 1:8 (see Table 3). All nodes, properties, relationships and indexes were stored in a Neo4j database of close to 800MB; see Table 4 for the complete breakdown.

Table 3. Graph metrics (node count, edge count, graph density)

Table 4. Database size in MB (properties, nodes, relationships, indexes, total)

Figure 7 shows the node degree (in plus out) distribution, using a log scale for the count of nodes at each degree. A large majority of nodes have a small degree, whereas a few nodes have a huge degree. These latter nodes are normally primitives and other commonly used types (e.g. int, with degree 79K) as well as common constants (e.g. NULL, with degree 19K) that are referenced in many places throughout the codebase.
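The slice approximations of Section 4.4 are plain reachability computations, and (as Section 6.1 later notes) they can be computed by traversing the graph directly rather than through Cypher. A sketch over a toy call graph; the adjacency data is illustrative, not taken from the kernel:

```python
from collections import deque

# Toy call graph as an adjacency list: caller -> callees.
calls = {
    "pci_read_bases": ["pci_size", "pci_read_config_dword"],
    "pci_size": ["ffs"],
    "pci_read_config_dword": ["raw_pci_read"],
    "raw_pci_read": [],
    "ffs": [],
}

def backward_slice(seed):
    """All functions the seed transitively calls (outgoing calls edges)."""
    seen, work = set(), deque([seed])
    while work:                      # breadth-first reachability
        for callee in calls.get(work.popleft(), []):
            if callee not in seen:
                seen.add(callee)
                work.append(callee)
    return sorted(seen)

print(backward_slice("pci_read_bases"))
# → ['ffs', 'pci_read_config_dword', 'pci_size', 'raw_pci_read']
```

A forward slice is the same traversal over the reversed adjacency (incoming calls edges). The visited set makes the traversal linear in the number of reachable edges, which is why a direct implementation can beat a general pattern-matching engine on this query shape.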
Figure 7. Linux kernel node degree distribution (node count on a log scale against in-plus-out node degree)

5.2 Use Case Benchmarks with Neo4j
Each of the example queries from Section 4 was run over the Linux kernel graph with Neo4j community edition. Each query was run ten times with a cold cache and ten times with a warm cache on a server with 8x Intel Xeon CPU E GHz and 128GB RAM. The Java virtual machine's maximum heap was set to 2GB. Table 5 reports the performance of the queries with both a cold and a warm cache. Cold runs of the simpler queries (code search and cross-referencing) take ~3 seconds, while warm runs bring that down to ~100ms, more than adequate for those types of queries. The debugging example takes a little longer, at 3 to 4 seconds, down to ~300ms with a warm cache, while the code comprehension example does not terminate within 15 minutes.

Table 5. Query performance: time cold/warm in ms (min, avg, max) and result count, for Code search (Fig. 3), X-referencing (Fig. 4), Debugging (Fig. 5), and Comprehension (Fig. 6; > 15 mins, aborted)

6. EXPERIENCES AND CHALLENGES
In this section we outline the challenging aspects of implementing the prototype version of Frappé on top of the Neo4j graph database, as well as some of the unresolved issues in making Frappé practical for real-world deployment in projects with codebases on the order of tens of millions of lines of code. From the succinct example queries in Section 4 and the results of the benchmarks in Section 5.2, it is clear that Neo4j and Cypher are a promising solution for a subset of source code querying use cases. To be a suitable all-round solution, however, a number of issues remain.
6.1 Query Performance
While Neo4j was able to process the queries for code search and code navigation with sub-second response times in large graphs (sufficiently fast for those tasks, and also achievable with relational models), performance was mixed in the remaining use cases (those a graph model is better suited to). This was largely due to suboptimal graph explorations being chosen by the Cypher query planner. For example, while transitive closure is expressible in Cypher, its associated runtime is unreasonable. We instead implemented transitive closure ourselves by traversing the graph directly via Neo4j's Java embedded mode (bypassing Cypher), achieving sub-second performance (computed via Neo4j's Java API in ~20ms). For the debugging use case, however, providing a specialized implementation is not a possible workaround: the shape of the pattern to be matched varies from one bug to the next, so a general pattern matching query language is needed.

6.2 Graph Model and Query Language
Modeling the disparate information from the preprocessor, abstract syntax tree, directory structure and linker as a connected property graph was relatively straightforward. The current model is able to support a range of use cases that are reasonably succinct to specify in Cypher. Further improvements can be made using some of the new features in Neo4j 2, specifically the new node label feature. Labels would allow nodes to carry both their underlying type (e.g. function, struct, union) and a grouped type (e.g. symbol, type, container). For example, the first Cypher query in Table 6, querying for all nodes that are both containers and symbols with the name "foo", becomes the second query in the table.

Table 6. Newer Cypher syntax

    Cypher 1.x:
    START n=node:node_auto_index("(TYPE: struct TYPE: union TYPE: enum
        <and so on>) AND NAME: foo")

    Cypher 2.x:
    MATCH (n:container:symbol{name: "foo"})

Edges could be grouped in a similar manner (e.g. link, preprocessor, containment, etc.), but unfortunately Neo4j does not extend its label support to edges.

The main issue with the current model is its representation of symbol references as edges. Due to the C preprocessor, the source file where a reference occurs in the code is not necessarily the same as that of either end node, so the edge and source file need to be associated directly. As Neo4j does not support hyper-edges, however, the NAME_FILE_ID and USE_FILE_ID properties are used to create the association instead. This makes matching all the references (e.g. calls, writes, reads) within a file much clumsier than it could be. One workaround for the lack of hyper-edge support is to model references as nodes instead. For example, foo -[:calls]-> bar, where an edge property associates the containing file, would become foo -[:calls]-> callsite -[:calls]-> bar and file -[:contains]-> callsite. With this option, specifying a match for the references associated with a particular file improves, but specifying matches in general becomes at best less succinct and at worst impossible: while Cypher supports matching repeating edges (via *), repeating patterns of edges and nodes cannot be expressed. A possible mitigation is to also keep the original edge as a shortcut, i.e. foo -[:calls]-> bar, but this still would not allow repeating edges to be filtered by their location.

6.3 Evolving Codebases as Temporal Graphs
In many circumstances, supporting the use cases mentioned above for the latest snapshot of a codebase is an improvement on what is currently possible for large C/C++ systems. In reality, however, developers are rarely making changes based on the latest
snapshot. Instead, they are working on top of versions that are days, months or years old, depending on the scope of the bug fix, feature addition, or backport being implemented. As such, Frappé would ideally support queries against all these versions of the code. One of the simplest ways to achieve this is to include the graph data store Frappé generates in the version control system alongside the source code it was derived from. This ensures that whenever the source code is checked out, the correct version of the graph store is locally available. The large size of the graph store (~1GB for the Unbreakable Enterprise Kernel, and an order of magnitude bigger for other large systems) reduces the appeal of this approach: it blows out the size of the version control repository and, with the hundreds to thousands of developers that work on these larger systems, creates significant network traffic. Another option is to maintain the graph data for all versions centrally. The simplest approach is then to store and query each version in isolation, routing traffic based on a specified version. This has two main drawbacks, however. Firstly, as large codebases evolve slowly, most of the graph data extracted remains the same from one version to the next, so increasing numbers of duplicate nodes, edges and properties are needlessly stored over time. Secondly, it fails to take advantage of the potential to query across versions. This is particularly useful in the software engineering domain, where understanding what has changed between versions and the wider effects of those changes is a common and difficult task in large codebases, known as software change impact analysis [9]. While it is possible to incorporate this temporal aspect into the graph model, doing so makes querying much clumsier and at times impossible, as noted in Section 6.2.
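One common way to avoid the duplicate-storage drawback is to tag each element with the version interval over which it exists, so any single version is a filter rather than a full copy. This minimal sketch is our illustration of that general idea, not a scheme the paper proposes:

```python
# Each edge carries the half-open version interval [added, removed)
# during which it exists; removed=None means "still present".
edges = [
    ("main", "calls", "bar", {"added": 1, "removed": None}),
    ("main", "calls", "baz", {"added": 1, "removed": 3}),    # deleted in v3
    ("main", "calls", "qux", {"added": 3, "removed": None}), # added in v3
]

def at_version(version):
    """Reconstruct the edge set of a single version as a filter."""
    return [(s, label, t) for s, label, t, v in edges
            if v["added"] <= version
            and (v["removed"] is None or version < v["removed"])]

print(at_version(2))  # → [('main', 'calls', 'bar'), ('main', 'calls', 'baz')]
print(at_version(3))  # → [('main', 'calls', 'bar'), ('main', 'calls', 'qux')]
```

Even in this toy form the trade-off the section describes is visible: storage is deduplicated and cross-version diffs become cheap, but every query pattern must now thread the interval predicate through each matched edge, which is exactly what makes temporal querying clumsy in a plain property graph.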
A more comprehensive solution is needed to efficiently store and query the delta of the program data as a given large codebase evolves.

7. RELATED WORK
For each of the challenges covered in the previous section (query performance, the property graph model and query language, and support for evolving graphs) there is a wide range of related work focused on each issue individually, but no single solution that addresses them all. As discussed, while Neo4j [4] provides an increasingly user-friendly graph query language [5] for pattern matching, its performance, although adequate for many use cases, is far from ideal on more complex queries, and it lacks support for efficiently storing and querying evolving graphs. PGX [10] and a number of other approaches are making significant improvements in pattern matching performance. LLAMA [11] and others focus on efficient storage of evolving graphs with minimal performance impact.

8. SUMMARY
Frappé is a source code querying system, and as such it is composed of four elements: an extractor to pull the desired system model from the source code, a repository to store that data, an interface that allows the user to specify queries and view their results, and a query processor that answers queries by examining the repository. In this paper we described our experiences using the Neo4j graph database and its Cypher query language as the repository and query processor in our source code querying system. We gave an overview of the graph model used by Frappé for C source code, and presented developer use cases for source code querying. The experimental data over the Unbreakable Enterprise Kernel shows that complex queries lack performance, whereas manual implementation of those queries can give results in a sub-second timeframe.
We show the challenges for graph databases, their graph model, and query languages as they relate to source code querying systems used with evolving large codebases of millions of lines of code. Our experience shows that in this area, several open challenges remain for the graph database community. 9. ACKNOWLEDGMENTS We would like to thank Matthew Johnson and Edward Evans for their help in collecting data for this paper. 10. REFERENCES [1] G. Robles, J.J. Amor, J.M. Gonzalez-Barahona, and I. Herraiz Evolution and growth in large libre software projects. In Principles of Software Evolution, Eighth International Workshop on DOI: [2] Timothy C Lethbridge and Nicolas Anquetil Architecture of a source code exploration tool: A software engineering case study. TR-97-07, School of Information Technology and Engineering, University of Ottawa (1997). [3] Nathan Hawes and Ben Barham Frappé: Using Clang to query and visualize large codebases. (October 2014). [4] Neo Technology Get Started. (March 2015). [5] Neo Technology Intro to Cypher. (March 2015). [6] C. Cifuentes, N. Keynes, Lian Li, N. Hawes, and M. Valdiviezo Transitioning Parfait into a Development Tool. Security Privacy, IEEE 10, 3 (May 2012), DOI: [7] Nathan Hawes Code Maps: A scalable visualisation technique for large codebases. In 22nd Australasian Software Engineering Conference: ASWEC Engineers Australia, dn= ; res=ieleng [8] Raoul-Gabriel Urma and Alan Mycroft Source-code queries with graph databases with application to programming language usage and evolution. Science of Computer Programming 97 (2015), [9] Robert S. Arnold and Shawn A. Bohner Software Change Impact Analysis. IEEE Computer Society Press, Los Alamitos, CA, USA. [10] Raghavan Raman, Oskar van Rest, Sungpack Hong, Zhe Wu, Hassan Chafi, and Jay Banerjee PGX.ISO: Parallel and Efficient In-Memory Engine for Subgraph Isomorphism. In Proceedings of Workshop on GRAph Data Management Experiences and Systems (GRADES 14). 
ACM, New York, NY, USA, Article 5, 6 pages.
[11] D. Margo, P. Macko, V. Marathe, and M. Seltzer. LLAMA: Efficient Graph Analytics Using Large Multiversioned Arrays. Ph.D. Dissertation. Harvard University.
[12] M. Weiser. Program Slicing. IEEE Transactions on Software Engineering SE-10, 4 (July 1984).