Compressing Medical Records for Storage on a Low-End Mobile Phone

Honours Project Report

Compressing Medical Records for Storage on a Low-End Mobile Phone

Paul Brittan
[email protected]

Supervised by: Sonia Berman, Gary Marsden & Anne Kayem

Marking categories (Min / Max / Chosen):
Requirement Analysis and Design
Theoretical Analysis
Experiment Design and Execution
System Development and Implementation
Results, Findings and Conclusion
Aim Formulation and Background Work
Quality of Report Writing and Presentation
Adherence to Project Proposal and Quality of Deliverables
Overall General Project Evaluation
Total marks:

Department of Computer Science
University of Cape Town
2011

Abstract

In the rural and developing parts of Africa, patients are often charged with keeping their own personal medical records due to the number of different doctors they need to visit. A solution was proposed whereby these patients could keep their medical records safe and secure on their mobile phones. This project aims to implement a system where a patient can obtain their electronic medical record from a medical practitioner on their mobile phone. Once on the phone, the record is compressed and encrypted for storage. This report details the part of the project that focuses on the compression and storage of the medical records, looking at the performance of different lossless data compression algorithms and storage options on a mobile device. It investigates the most efficient way to compress the records given the limited resources on the phone. By implementing the LZ77 and DEFLATE compression algorithms in an application on an Android phone, we were able to test each algorithm's performance. This was done by monitoring how much of the phone's resources (such as CPU and RAM) were used during execution. From the results of the performance tests it was clear that DEFLATE, available as a fully optimized Java library class, was the most efficient at compressing the medical records and will therefore be used in the final implementation of the project.

Acknowledgments

I would like to thank my supervisors for all their input and guidance during the course of the project, giving special thanks to Sonia Berman for all the time she gave and the last-minute feedback, as well as to Gary Marsden, who always had time to meet with me and point me in the right direction. Thanks to my friends and family who supported and encouraged me when I was feeling overwhelmed. Lastly, I would like to give a big thank you to my girlfriend, Sarah Palser, for believing in me and for all her encouragement.
"If we knew what it was we were doing, it would not be called research, would it?" (Albert Einstein)

Contents

Abstract
Acknowledgments
List of Figures

Chapter 1: Introduction
    Research questions
        Performance comparisons of compression algorithms
        Comparison of different storage techniques for the medical record
    Testing and Evaluation
    System overview
    The Key Success Factors
    Outline

Chapter 2: Background
    Electronic Medical Records
        Existing EMR systems in rural areas
        Organisational and user issues
        Data security and confidentiality
    Lossless Compression algorithms
        Lempel-Ziv 77
        Lempel-Ziv-Welch
        Prediction with Partial Match
        Burrows-Wheeler Transform
    Evaluation of Algorithms

Chapter 3: Design
    Design Aims
        Design Constraints
    Design Process
        Medical Data
        System Architecture
        System Interface
        Algorithms
        Storage options
        Software and Hardware needed for implementation
    Design Summary

Chapter 4: Implementation
    Main
    LZ77
    LZW
    DEFLATE
    Performance
    Implementation summary

Chapter 5: Testing & Evaluation
    Introduction
    Test Methodology
        Independent and dependent variables
        Testing Design
    Results
        CPU Usage
        Memory Usage
        Compression Ratios
    Evaluation of results

Chapter 6: Conclusion
    Future Work

References

Appendices
    Appendix A
    Appendix B

List of Figures

Figure 1 - Diagram of separate components that were implemented by each group member
Figure 2 - Example of LZW Compression
Figure 3 - Results from Compression Ratio Test
Figure 4 - Results from Static Memory Test
Figure 5 - Results from Completion Time (Web) Tests
Figure 6 - Results from Completion Time (Text) Tests
Figure 7 - Application's Start Screen
Figure 8 - Application's Main Menu
Figure 9 - Layout of View Record Screen
Figure 10 - Android Platform Distribution
Figure 11 - Application's Class Hierarchy
Figure 12 - LZ77 Compression Pseudo Code
Figure 13 - LZ77 Decompression Pseudo Code
Figure 14 - LZW Compression Pseudo Code
Figure 15 - LZW Decompression Pseudo Code
Figure 16 - CPU usage for LZ77 when compressing 1000kB file
Figure 17 - CPU usage for DEFLATE when compressing 1000kB file
Figure 18 - CPU usage for LZ77 when decompressing 1000kB file
Figure 19 - CPU usage for DEFLATE when decompressing 1000kB file
Figure 20 - Average Memory Usage for LZ77 compression
Figure 21 - Average Memory Usage for DEFLATE compression
Figure 22 - Average Memory Usage for LZ77 decompression
Figure 23 - Average Memory Usage for DEFLATE decompression
Figure 24 - Compression Ratios achieved with LZ77
Figure 25 - Compression Ratios achieved with DEFLATE
Figure 26 - CPU usage for LZ77 when compressing 500kB file
Figure 27 - CPU usage for LZ77 when decompressing 500kB file
Figure 28 - CPU usage for LZ77 when compressing 800kB file
Figure 29 - CPU usage for LZ77 when decompressing 800kB file
Figure 30 - CPU usage for DEFLATE when compressing 500kB file
Figure 31 - CPU usage for DEFLATE when decompressing 500kB file
Figure 32 - CPU usage for DEFLATE when compressing 800kB file
Figure 33 - CPU usage for DEFLATE when decompressing 800kB file

Chapter 1

Introduction

The advancements in mobile technologies and mobile computing power have caused an increase in the popularity of mobile devices. Mobile devices are now being used for everyday tasks such as communication through emails or instant messages and daily scheduling with the help of calendars. With this increase, the usage of mobile devices has been broadening through industries such as healthcare, insurance and field services [1]. To keep up with all the data that needs to be stored on a mobile device or transferred quickly across a network, there needs to be a way to efficiently compress and decompress the data without losing information. Lossless data compression is a set of data compression algorithms that allow the original data to be reconstructed from the compressed data [2]. This project investigates using lossless data compression for the storage of medical records on mobile phones.

The original idea for this project was proposed by Simelela, an NGO dealing with rape victims, in which the aim was to aid these victims with revealing certain medical information when reporting their case, by storing their medical records on a mobile phone. Over and above this, patients in developing countries are often responsible for storing and transporting their own paper-based medical records, which can lead to the loss or damage of these documents. Storing these records on a mobile phone will therefore make the process more convenient for both patients and medical practitioners. This project involves designing and implementing an application that patients can use to store and transfer their medical records between the necessary doctors and hospitals. The application consists of a number of different components: a graphical user interface, security in the form of encrypting the medical records and, finally, compression of the medical records for efficient storage on a mobile device.

When developing applications for mobile phones, it is important to be aware of and deal with the restrictions on the capabilities of these devices [11]. In this paper the focus will be on compressing the medical records for storage on a mobile phone. This will be done by implementing various compression algorithms and comparing their performance when executed on a mobile phone. My partner, Shelley Petzer, will be focusing on the security component. She will be investigating a secure transfer medium for getting the records onto the mobile phone and encrypting the medical records once on the phone.

1.1 Research questions

There are two key research questions that will be investigated in this document. The first is whether there is a difference in energy consumption when using different compression algorithms on a low-end mobile phone. The second is whether there is an improvement in storing the medical data on the phone's SIM card rather than the microSD card.

1.1.1 Performance comparisons of compression algorithms

Due to the limitations of a low-end mobile phone, it is important to analyse the performance of the compression algorithms. This is firstly to assess whether compression is feasible at all and secondly to find the most suitable choice for compressing a medical record on a mobile phone. This will involve the investigation of several algorithms in order to find one that reduces the storage space required on the mobile phone whilst minimising the resources, such as CPU and memory, that it requires to execute.

1.1.2 Comparison of different storage techniques for the medical record

For the medical records to be stored in internal memory, they will need to undergo a series of lossless data compressions. This will be done so that they can be stored within the limited storage space available on mobile phones. These compressions are computationally intensive and may exceed the processing power of standard mobile phones. Research will therefore be conducted to assess whether using internal memory is a feasible option. If internal memory storage is not successful, another approach will be to investigate storing the records on a microSD card in the mobile phone. With the advances in mobile technology, the available memory on microSD cards is sufficient for storing the records at a low cost.

1.2 Testing and Evaluation

In order to test these hypotheses, a system needs to be created that will allow the most suitable compression algorithm to be found and the most practical storage method to be tested. The system will need to be able to run different compression algorithms and then store the compressed file in the appropriate place. To test the algorithms, a series of performance tests were run at the same time as the implemented algorithm. The tests allowed for the monitoring of the dependent variables, such as the CPU, memory and execution time used by an algorithm, while varying independent variables such as buffer size, algorithm and file size. The results from these tests will then be analysed and converted into graphs for easy evaluation of their performance relative to each other, so that a conclusion that answers the research questions can be drawn.

1.3 System overview

The prototype will be designed to have two separate components. The first part deals with security and involves securing the transmissions between the medical database server and the mobile phone. This will also involve encrypting and decrypting the data on the mobile phone. This part of the project is highlighted in green in Figure 1 and will be completed by my project partner. The second component involves compressing and decompressing the records on the mobile phone and then efficiently storing these records. This part of the project will be covered in this paper and is highlighted in blue in Figure 1 below. The user interface is not in the scope of this project and is left for future work (see Section 6.1). The components are designed to be as separate as possible to allow for individual testing of each component and are therefore linked only by the data that is passed between them. However, the components can easily be integrated to form a complete application.

Figure 1 - Diagram of separate components that were implemented by each group member (Compression/Decompression and Storage; Encryption/Decryption)

1.4 The Key Success Factors

The success of the project will be judged by three components: firstly, the ability to store the medical record on the mobile phone using the best possible medium of storage, whether a microSD card or the internal memory of the phone; secondly, the creation of a prototype that efficiently encrypts and compresses the medical records on the mobile phone within the limited resources available; and finally, securing the transmission between the medical database and the patient's mobile phone in a way that is both efficient and cost-effective.

1.5 Outline

This document outlines the development and testing of different lossless compression algorithms on a mobile phone. It also contains the necessary sections for analysing the research. Chapter 2 outlines current research on electronic medical records, their advantages and disadvantages, as well as giving some examples of electronic medical record systems in developing countries. It also provides background information on commonly known lossless data compression algorithms and a quick comparison of them. Chapter 3 covers the design plan for the application that was implemented. It gives details of the different aspects of the system and justifies the design choices. These aspects include the medical data that was used to test the system, the system's architecture, the basic system interface and, finally, the software and hardware needed for implementation. The structure of the application that was implemented and the classes involved are discussed in Chapter 4, focusing on how the program works, the methods that were used and the data structures that were implemented in each class. Chapter 5 describes the testing process and how the results were obtained. Starting with the test methodology, which explains how the algorithms were tested, the chapter then goes on to show the results and finally evaluates them to form a conclusion. Finally, Chapter 6 contains the overall findings and conclusions of this work as well as suggestions for future work.

Chapter 2

Background

2.1 Electronic Medical Records

The growing use of electronic medical record (EMR) systems in Europe and the United States has been driven by the idea that they can help to improve the quality of health care. Decision support systems (DSS) are becoming important tools in reducing medical errors [12]. Email has become very important and commonly used in healthcare systems today, and access to medical data such as online journals is also increasing. Even in developed nations, the development of EMR systems is still an uncertain and challenging assignment, calling for the matching of local needs to available technologies and resources. There is much less experience with creating EMR systems for the developing world. Requirements, priorities and local constraints are less well understood and are more varied. Some environments in the developing world are similar to a European or US healthcare environment and can therefore use similar software, but other environments have very limited resources [12]. It is therefore highly unlikely that a single EMR architecture and implementation will fit all environments and needs. A handful of projects in developing countries have now met the test of actual implementation in such settings and are in day-to-day use.

Advantages of EMR systems [12]:
- Improvement in legibility of clinical notes
- Decision support for drug ordering, including allergy warnings and drug incompatibilities
- Reminders to prescribe drugs and administer vaccines
- Warnings for abnormal laboratory results
- Support for programme monitoring, including reporting outcomes, budgets and supplies
- Support for clinical research
- Management of chronic diseases such as diabetes, hypertension and heart failure

Disadvantages in implementing EMR systems [12]:

User problems:
- Lack of user training
- Poor initial design limiting capabilities and expansion potential
- Systems are difficult to use or too complex
- Lack of involvement of local staff in design and testing of systems
- Lack of systems and staff training to ensure data quality and completeness
- Lack of perceived benefit for users who collect the data
- Dependence on one individual champion

Technical problems:
- Lack of back-up systems in event of computer loss
- Poor system security leading to viruses and spyware
- Unstable power supplies and lack of battery back-up
- Poor or inadequate data back-ups
- Lack of technical support staff and/or system difficult to maintain

2.1.1 Existing EMR systems in rural areas

AMRS, Kenya [13]

Indiana University School of Medicine and Moi University School of Medicine have been collaborating for over 15 years. In February 2001, this collaboration led to the Mosoriot Medical Record System (MMRS). The MMRS was installed in a primary care healthcare centre in rural Kenya.

In November 2001, the MMRS software was adapted to support the AMPATH (Academic Model for the Prevention and Treatment of HIV/AIDS) project and renamed AMRS. The system is designed around two networked computers running Microsoft (MS) Access, powered by a UPS with solar battery back-up. For the AMPATH project, the network has expanded to seven networked computers linked to a single MS Access database. In the MMRS, patients are registered in the system upon arrival, travel through the clinic with a paper visit form, and present the visit form as they depart. Clerks then perform the registration and transcribe visit data. AMRS data are collected on paper forms at each visit, delivered to a central location for data entry, and then returned to the patient's paper chart. MMRS provides both patient registration and visit data collection functions. Data are collected on all patients seen in the medical clinic, including their laboratory results and medications. AMRS supports comprehensive HIV care as well as mother-to-child transmission prevention, while serving as a rich database for quality improvement and answering research questions. The growing AMRS and MMRS databases serve both clinical and research needs, generating clinical summary reports for providers and providing a centralised source of data for epidemiological research.

The HIV-EMR system, Haiti [14]

Since 1999, Partners in Health (PIH) has run a community-based HIV treatment programme in Haiti with its sister organisation Zanmi Lasante, expanding to seven public health clinics in an area with virtually no roads, electricity or telephone service. Based on the PIH EMR that was implemented in Peru, satellite-based internet access at each site supports email and web communication. It is an open-source web system backed by an Oracle database (the same as the PIH EMR), with an additional offline client for data entry and review. The system is bilingual in English and French.

With direct data entry, doctors enter case histories and medications themselves, whereas technicians enter laboratory results and pharmacists enter stock records. History, physical examination, social circumstances and treatment are also recorded. Decision support tools provide allergy and drug interaction warnings, and generate warning emails about low CD4 (cluster of differentiation 4) counts. An offline component of the EMR was developed to overcome unreliable internet communications at some sites. This allows data entry and case viewing when the network is down, and has proven to be reliable and popular with clinical staff. The HIV-EMR shows the feasibility of implementing a medical record system in remote clinics in an area with virtually no infrastructure and limited technical expertise.

Careware, Uganda [15]

A team at the US Department of Health and Human Services has developed a medical record system to support HIV treatment via the Careware system. The system is designed as a stand-alone database built with MS Access. It provides comprehensive tools for tracking HIV patients and their treatment, including clinical assessment, medications and billing data. It is widely used in health centres and hospitals in the US, and has recently been internationalised and deployed in Uganda in October. Careware is an example of a US-based stand-alone EMR that is being adapted to developing-country environments. An internet-accessible version that is under development will allow local data entry offline.

2.1.2 Organisational and user issues

Data quality and completeness are critical to the success of any information system. Achieving high standards is a particular challenge in sites with limited computer literacy and experience. It is important to design systems that are easy to use and have good instructions and training.

The system should collect the minimum data necessary for the task, and data items should be structured and coded where possible to simplify data checking and optimise reuse [12]. This does not mean that free text must be excluded; excluding it prevents the system from capturing any data that do not fit the normal pattern, and such data will either be lost or recorded in hard-to-locate paper records. Structured data such as laboratory test results might benefit from double entry. In some projects

physicians and other staff enter data directly. This has the advantage of avoiding transcription errors, and also allows order entry systems to be deployed to check for potential medical errors. A well-trained local data manager is fundamental in maintaining data quality [12]. Maintaining regular communication with users through a data manager and meetings can also prove important in maintaining data quality. Prompt and effective help to users is a vital factor in generating support and ensuring widespread use of an EMR system. Low literacy contributes to inconsistent spelling of patients' names and addresses. Search tools can be used to match records with similar names, ages, genders and addresses, and either merge the two records or email the details to the users for advice. The use of patient ID cards has also been helpful in several projects in Africa. A WAN system can be valuable in enforcing a single unique identifier across sites.

2.1.3 Data security and confidentiality

Views of medical data security and confidentiality vary across developing countries. In some countries the use of electronic databases is treated with great suspicion; in others, staff think nothing of emailing sensitive medical data. Patients can face serious risk if their communities discover their HIV status or other sensitive medical information [12]. It is imperative that healthcare providers protect this information using data encryption, which is a key aspect of Shelley Petzer's component in this project.

2.2 Lossless Compression algorithms

Lossless data compression and decompression algorithms can be efficiently implemented on a mobile device, even with hardware limitations such as low processing power, limited memory and battery life [1]. Lossless data compression has many advantages on a mobile device, such as reducing the network bandwidth required for data exchange, reducing the disk space required for storage and minimising the main memory required [3].

This section describes four commonly used lossless compression algorithms. Lempel-Ziv 77 (LZ77) and Lempel-Ziv-Welch (LZW) use dictionary methods that encode upcoming data as references to exact matches in data that has already been encoded. Prediction with Partial Match (PPM) is a statistical data compression algorithm based on context modelling and prediction [3], while the Burrows-Wheeler Transform (BWT) on its own does not reduce the size of the data but only makes the data easier to compress [4]. These algorithms are then compared using benchmark tests to find which one is optimal for implementation on mobile devices.

2.2.1 Lempel-Ziv 77

The Lempel-Ziv 77 (LZ77) lossless compression algorithm is used as the foundation for compression tools such as GZip [5]. The algorithm is asymmetric in time and memory, because encoding is much more demanding than decoding. The LZ77 algorithm uses data structures like binary trees, suffix trees and hash tables, which provide fast searching without the need for much memory [6]. LZ77 compresses data by replacing sections of the data with a reference to matching data that has already passed through both the encoder and decoder [3]. No searching is needed when decompressing the data, because the compressor has issued an explicit stream of literals, locations and match lengths [7]. The process becomes even more efficient if the window is stored entirely in the cache, so that retrieving a match is fast no matter where it occurs in the window [7]. The LZ77 algorithm works by maintaining a current pointer into the input data, a search buffer and a look-ahead buffer. Symbols that appear before the current symbol make up the search buffer, whereas symbols that appear after the current symbol are placed in the look-ahead buffer. Together the buffers make up a window that shows the section of input currently being viewed. As the current pointer moves forward, the window moves through the input.

While symbols are in the look-ahead buffer, the algorithm looks in the search buffer for the longest match [7]. Instead of outputting the matched symbols themselves, they are encoded as the offset from the current pointer, the length of the match and the symbol in the look-ahead buffer that follows the match. The encoder and decoder must both keep track of the last 2KB or 4KB of the most recent data. The encoder needs to keep this data to

look for matches, while the decoder needs to keep it to understand the matches the encoder refers to. LZ77 provides an option to increase the window size to improve performance: with a larger window there are improvements in the speed with which matches are found, but at the cost of memory.

2.2.2 Lempel-Ziv-Welch

The Lempel-Ziv-Welch (LZW) algorithm was introduced for cases in which a match cannot be found using LZ77. Instead of sliding buffers, LZW uses a separate dictionary which serves as a codebook [8]. The compressor builds its dictionary from the input data as it reads the input stream. When a group of symbols is found, the dictionary is checked: the longest prefix that matches the input is encoded, and that prefix extended with the unmatched symbol that follows is added to the dictionary. See Figure 2 below for an example of LZW compression. The decompressor then builds the same dictionary, so that the indices it receives refer to the same symbols as those in the compressor's dictionary.

Input Stream: AAAABAAABCC

    Encoded String    New Dictionary Entry
    A                 AA
    AA                AAA
    A                 AB
    B                 BA
    AAA               AAAB
    B                 BC
    C                 CC
    C                 -

Figure 2 - Example of LZW Compression

This algorithm provides a quick build-up of long patterns that can be stored, but there are several downsides. Until the dictionary is filled with large, commonly seen patterns, the resulting output will be bigger than the original input. Since the dictionary can grow without bound, LZW must be implemented so that it deletes the existing dictionary when it gets too big, or it must find another way to limit memory usage [7]. The algorithm has no communication overhead and is computationally simple. Since both the compressor and the decompressor have the initial dictionary, and all new entries into the dictionary are created based on entries that already exist, the decompressor can recreate the dictionary quickly as data is received.
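To make the dictionary mechanics concrete, a minimal LZW codec can be sketched in Java. The `LzwSketch` class and its method names are illustrative assumptions, not the report's actual implementation, and for simplicity it emits integer codes rather than a packed bit stream:

```java
import java.util.*;

// Minimal LZW codec sketch. The dictionary is seeded with the 256 single-byte
// symbols; the longest known prefix of the remaining input is emitted as an
// integer code and that prefix plus its next symbol becomes a new entry.
public class LzwSketch {
    public static List<Integer> compress(String input) {
        Map<String, Integer> dict = new HashMap<>();
        for (int i = 0; i < 256; i++) dict.put(String.valueOf((char) i), i);
        int nextCode = 256;
        List<Integer> out = new ArrayList<>();
        String current = "";
        for (char c : input.toCharArray()) {
            String extended = current + c;
            if (dict.containsKey(extended)) {
                current = extended;               // keep growing the match
            } else {
                out.add(dict.get(current));       // emit code for longest match
                dict.put(extended, nextCode++);   // learn the new pattern
                current = String.valueOf(c);
            }
        }
        if (!current.isEmpty()) out.add(dict.get(current));
        return out;
    }

    public static String decompress(List<Integer> codes) {
        Map<Integer, String> dict = new HashMap<>();
        for (int i = 0; i < 256; i++) dict.put(i, String.valueOf((char) i));
        int nextCode = 256;
        StringBuilder out = new StringBuilder();
        String previous = dict.get(codes.get(0));
        out.append(previous);
        for (int i = 1; i < codes.size(); i++) {
            int code = codes.get(i);
            // Special case: the code may refer to the entry the compressor
            // created on this very step, which the decoder has not built yet.
            String entry = dict.containsKey(code)
                    ? dict.get(code)
                    : previous + previous.charAt(0);
            out.append(entry);
            dict.put(nextCode++, previous + entry.charAt(0));
            previous = entry;
        }
        return out.toString();
    }
}
```

Running `compress("AAAABAAABCC")` yields eight codes, one per encoded string in Figure 2, and decompressing them restores the original input.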
To decode a dictionary entry, the decoder must have received all previous entries in the block [5].

2.2.3 Prediction with Partial Match

Prediction with Partial Match (PPM) is an adaptive statistical data compression algorithm that uses context modelling and prediction [9]. It uses a fixed-context statistical modelling algorithm, which predicts the next character in the input data. The prediction probabilities for each preceding character in the model are calculated from frequency counts which are updated regularly. The symbol that occurs is encoded relative to its predicted probability, using arithmetic coding. Although PPM is simple, it is also computationally intensive [3]. An arithmetic encoder can use the probabilities to code the input efficiently. Longer contexts will improve the probability estimates, but will take more time to calculate. To deal with this, escape symbols are used to gradually reduce context lengths. This creates a drawback: encoding a long run of escape symbols can use up more space than would have been saved by the use of long contexts. Storing and searching through each context is the reason for the large memory usage of the PPM algorithm [7]. With the PPM algorithm, a table is built for each order, from 0 to the highest order of the model. After parsing the input into substrings, the context for each substring is the set of characters that come before it. The table keeps a count of the frequency of each substring that has been found for the given context [10].
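The per-order frequency tables can be sketched in Java as below. The `PpmSketch` class, its training-string constructor and the deterministic `predict` method are illustrative assumptions only (a real PPM coder drives an arithmetic coder with these counts and updates them adaptively); the sketch shows the table building and the fallback from higher to lower orders:

```java
import java.util.*;

// Sketch of PPM-style order-k frequency tables: for each order, a table maps
// a context string to the frequency counts of the character that followed it.
public class PpmSketch {
    private final List<Map<String, Map<Character, Integer>>> tables;
    private final int maxOrder;

    public PpmSketch(String training, int maxOrder) {
        this.maxOrder = maxOrder;
        tables = new ArrayList<>();
        for (int order = 0; order <= maxOrder; order++) {
            Map<String, Map<Character, Integer>> table = new HashMap<>();
            for (int i = order; i < training.length(); i++) {
                String ctx = training.substring(i - order, i);
                table.computeIfAbsent(ctx, k -> new HashMap<>())
                     .merge(training.charAt(i), 1, Integer::sum);
            }
            tables.add(table);
        }
    }

    // Predict the next character: search the highest-order table first and
    // shorten the context until a match is found; order 0 (empty context)
    // simply returns the most common character seen in the training string.
    public char predict(String context) {
        for (int order = Math.min(maxOrder, context.length()); order >= 0; order--) {
            String ctx = context.substring(context.length() - order);
            Map<Character, Integer> counts = tables.get(order).get(ctx);
            if (counts != null) {
                return counts.entrySet().stream()
                        .max(Map.Entry.comparingByValue())
                        .get().getKey();
            }
        }
        throw new IllegalStateException("empty model");
    }
}
```

For example, a model trained on "abababab" with maxOrder 2 predicts 'a' after the context "ab".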

When the PPM algorithm is used, it searches the highest-order table for the given context. If the context is found, the next character with the highest frequency count is returned as the prediction. If there are no matches to any entries in the table, the context is reduced by one character and the next lowest-order table is searched. This process is repeated until the context is matched or the zeroth-order table is reached. The zeroth-order table simply returns the most common character seen in the training string [10].

2.2.4 Burrows-Wheeler Transform

The Burrows-Wheeler Transform (BWT) is a reversible algorithm that is used in the bzip2 compression algorithm [4]. On its own, BWT does not reduce the size of the data; it only formats the data so that it becomes easier to compress with other algorithms. When a string of characters is transformed using the BWT, the number of characters remains the same; the algorithm only changes the order in which the characters appear. If the input string has multiple substrings that appear with high frequency, then the transformed string will have multiple locations in which a character recurs several times in a row. This helps with compression, since most compression algorithms are more effective when the input contains runs of repeated characters. After the BWT is completed, the data is compressed by running the transformed input through a move-to-front encoder and then a run-length encoder. The BWT takes advantage of symbols located further on in the string, not just those that have already passed. The biggest problem is that the BWT requires the allocation of RAM for the entire input and output streams, and a large buffer is needed to perform the required sorts [5]. Even though BWT-based compression could be performed with very little memory, common set-ups use fast sort algorithms and data structures that need large amounts of memory to supply speed [7].
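As a naive illustration, the forward transform can be sketched in Java by materialising and sorting every rotation of the input. This is for clarity only (real implementations such as bzip2 use much more memory-efficient suffix sorting), and the `BwtSketch` class and its "lastColumn|rowIndex" output format are illustrative assumptions:

```java
import java.util.*;

// Naive Burrows-Wheeler Transform sketch: sort all rotations of the input
// and read off the last column, plus the row index of the original string,
// which the inverse transform needs to undo the permutation.
public class BwtSketch {
    public static String transform(String s) {
        int n = s.length();
        String[] rotations = new String[n];
        for (int i = 0; i < n; i++) {
            rotations[i] = s.substring(i) + s.substring(0, i);
        }
        Arrays.sort(rotations);                 // lexicographic rotation sort
        StringBuilder lastColumn = new StringBuilder();
        int originalRow = -1;
        for (int i = 0; i < n; i++) {
            lastColumn.append(rotations[i].charAt(n - 1));
            if (rotations[i].equals(s)) originalRow = i;
        }
        return lastColumn + "|" + originalRow;  // e.g. "nnbaaa|3" for "banana"
    }
}
```

Transforming "banana" gives the last column "nnbaaa": the repeated characters are grouped into runs, which is exactly what makes the subsequent move-to-front and run-length stages effective.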
Regardless of memory issues, algorithms that implement the BWT compress files at a high compression ratio.

2.3 Evaluation of Algorithms

Four comparison tests were run on tools that implement the algorithms [7].
LZO and Zlib were used to test the Lempel-Ziv 77 (LZ77) algorithm; Compress was selected to test the Lempel-Ziv-Welch (LZW) algorithm; PPMd (the algorithm used by WinRAR) was used to test the Prediction with Partial Match (PPM) algorithm; and finally bzip2 was chosen to test the Burrows-Wheeler Transform (BWT) algorithm. When benchmark comparisons using traditional metrics are run on the above tools, the following graphs are produced.

Figure 3 - Results from Static Memory Test
Figure 4 - Results from Compression Ratio Test
Figure 5 - Results from Completion Time (Text) Tests
Figure 6 - Results from Completion Time (Web) Tests

Analysing the graphs produced by the benchmark tests shows that the algorithm that gives the best compression ratios is PPM, followed by BWT. These ratios, however, come at a great cost in both time and memory, resources that are not in abundance on mobile devices. The fastest of the four algorithms on both text and web data is LZO, which uses the LZ77 algorithm. Even though LZO is quick and uses the least static memory, it does give the worst compression ratio when compressing and decompressing text. From these results, the LZ77 algorithm was chosen as the algorithm to be implemented. Based on a design similar to LZO, a lossless data compression/decompression application for a mobile device is being designed that is quick and requires little processing power and memory. These attributes will also help preserve the battery life of the mobile device, since completing a run will not require a large amount of time or processing power. Although LZO has a weak text compression ratio, it does provide good web compression, which will help in compressing XML-formatted files and in cases where the user needs to download and update a lot of data across the network.
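The traditional metrics referred to above reduce to simple ratios. The helper below is an illustrative sketch, not the cited benchmark's code, using the convention that a lower compressed/original ratio is better:

```java
// Illustrative computation of common compression benchmark metrics.
// This is a hypothetical helper, not the code of the benchmark cited in [7].
class MetricsSketch {
    // Ratio of compressed size to original size; lower is better under this convention.
    static double compressionRatio(long originalBytes, long compressedBytes) {
        return (double) compressedBytes / originalBytes;
    }

    // Percentage of space saved by compression.
    static double spaceSavingPercent(long originalBytes, long compressedBytes) {
        return 100.0 * (1.0 - (double) compressedBytes / originalBytes);
    }
}
```

So a 1000-byte record compressed to 250 bytes has a ratio of 0.25, i.e. a 75% space saving.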

Chapter 3 Design

3.1 Design Aims

As discussed in Chapter 1, the aim of this project is to investigate whether a medical record can be securely and effectively stored on a mobile device. That is, can a mobile device with its limited specifications handle complex compression and encryption algorithms? The implementation of the proposed design is an integrated system which allows a medical record to be securely transferred from a central database on a computer to the mobile device. Once on the device, the system will then compress and encrypt the record for safe storage. This allows medical patients to keep their own medical records safe and to easily transfer the electronic record between doctors. Measuring the effectiveness of this system, in terms of how well it handles the algorithms, will answer the research question previously stated. This means that the experimental design is important in achieving the project aims. To this end, a system was designed such that a set of tests could be run and the results evaluated. The test process that will be run on the implemented system is described in detail in Chapter 5.2 of this report.

Design Constraints

The system being designed needs to take into account the limited resources that are available on a mobile device, as well as the setting for which the application is intended. Since the application is being designed for rural or developing areas with low income rates, it needs to run on a simple, inexpensive phone. The people who will be using the application will not have high technological literacy, so the application needs to be user friendly and easy to use. Therefore the main design goals for the system are that it needs to be quick, able to compress large data files, and have a simple, easy-to-use interface, all within the small amount of CPU and memory that is available.
3.2 Design Process

Before a testable system can be implemented, it needs to go through a design process so that we know exactly what needs to be implemented and how it will all fit together. The process through which the design of the system was constructed consisted of the analysis of the mHealth and OpenMRS medical systems already in place, as well as the others discussed in Chapter 2. It also included meetings with Cell-Life, a company that aims to improve the lives of people infected and affected by HIV in South Africa through the appropriate use of mobile technology. The different aspects of the system are discussed in this section, which also justifies the choices that we made while designing the system.

3.2.1 Medical Data

Before designing the system we needed to know exactly what the input for the system would be. To avoid the process of acquiring ethical clearance and trying to get sensitive information from real patients, we decided to use pseudo medical records. These pseudo medical records are created by piecing together parts from other anonymised medical records. They provide the data that we need without compromising the privacy of any patient. The first attempt at getting pseudo medical records was through OpenMRS. Seeing as OpenMRS is already an established medical service, its records would have provided us with data in a standard medical format, giving good data to test our system. However, the medical records were contained in a MySQL database and could only be accessed via the OpenMRS web service. After downloading the web service we found that it requires a lot of coding with Java

JDBC to extract specific elements from the database. This attempt proved fruitless, since we needed a large data file, such as a personal history, to test our system, whereas OpenMRS only provided small details on request. After the first attempt was unsuccessful, we set up a meeting with Sarah Brown [[email protected]] and Simon Kelly [[email protected]] from Cell-Life, to try to get a better understanding of what is needed in a medical record and to see if they knew of any pseudo medical records that we could use for this project. Cell-Life is a non-profit organisation that provides technology-based solutions for the management of HIV and AIDS and other infectious diseases such as TB. Cell-Life's primary function is to address health-related logistical challenges in developing countries, such as the provision and distribution of anti-retroviral treatments, continuous patient monitoring and evaluation, and the collection and communication of relevant data. This is achieved through the use and development of software supported by existing technologies such as mobile phones and the Internet. Cell-Life has had a lot of experience working with medical records on mobile devices over the years it has been running. Sarah informed us that the key data elements that a medical record must have are basic personal identifiers. These key elements include: patient folder ID, date of birth, gender, and first and last names. This data is essential because, using those key items, the doctor or nurse can identify the patient and retrieve the needed data. She also stated that the most important data to keep would be the medical history as well as blood results. The reason for this is that the doctor can easily see from the history and blood results what is wrong with the patient and what the patient has been treated for. Simon, who is a developer for Cell-Life, informed us that using the XML format would be to our advantage.
XML is the best format for transferring medical records between the computer and the mobile device, and with XML's tags it would be easy to search through and store the different sections of the record. Both Cell-Life developers had concerns about storing the medical records in the limited space of a SIM card. Due to the fact that more stored information helps the doctors with their diagnostics, combined with the fact that in developing countries many households share one phone, there needed to be a way to securely store multiple records on the phone. From the information obtained from Cell-Life we were able to find Records For Living. Records For Living is an online service that provides the user with the ability to create and store Personal Health Records securely online. A Personal Health Record (PHR) is an online copy of the user's medical information, collected from all of the user's doctors and hospitals. The PHR is also customisable, allowing other information to be added that doctors are usually unaware of, such as dietary habits, the patient's symptoms, and reactions to medications. To keep patient confidentiality we would not be using real medical records, but rather the samples that are provided on the site. We obtained two sample records from Records For Living (500 KB and 800 KB), and from these we artificially generated a 1 MB record by adding extra data to the fields. The reason for obtaining files of different sizes is to see how our system works with bigger input data.

3.2.2 System Architecture

As described in section 1.3 of the Introduction chapter, the system is divided into two projects that are integrated to form the complete system. The first project was designed and implemented by Shelley Petzer and deals with the security components. This involves finding a secure way to transfer the medical record from the computer to the mobile device via Bluetooth.
The reason for using Bluetooth is that it doesn't cost anything to transfer data across this medium, unlike 3G or GPRS. Bluetooth was also chosen because nearly all low-end phones come standard with Bluetooth. Shelley's project will also look into encrypting the medical records on the computer and mobile phone using different encryption algorithms. The second part of the project, which is discussed in this paper, deals with the compression and storage components. Once the medical record has been transferred to the phone, the system will compress the medical record using

different compression algorithms. The algorithms that were chosen for this system were based on the results found in section 2.3 of the background chapter; this choice is discussed in more detail later in this chapter. Another component of this project is storage on the mobile device: on the SIM card, the SD card, or the device's internal memory. The design of this component is also discussed in more detail later in this chapter. The two projects are integrated through the data that is passed between them. The system is designed so that the medical record comes from the medical professional's computer in XML format; it is then encrypted into a byte array and transferred onto the mobile device. On the mobile device the byte array is decrypted back to XML for compression. The reason for the decryption once on the mobile device is that the record can then be verified with its digital signature, which allows us to check that the record was sent correctly. The XML is then passed to the compression section, in which it is compressed into a zip format. The zipped file is then passed to Shelley's encryption class, which encrypts the zipped file for storage using a different key. The system is also designed so that encrypted medical records stored on the phone can be decompressed and decrypted back onto the computer.

3.2.3 System Interface

The first concept of the project involved creating a system where users can use a mobile device to view and securely store their medical records and, if need be, send them to another device. So an interface was designed to allow the user to easily navigate through the medical record as well as send off the necessary data.

Figure 7 - Application's Start Screen
Figure 8 - Application's Main Menu

The diagrams above show the first concept design of the interface menus. The start screen (Figure 7) gives the user the ability to log in to their account. This deals with the issue that more than one family member will use the mobile phone. Once logged in, the user can choose to view their record or send and receive a record to or from a doctor.

Figure 9 - Layout of View Record Screen

Figure 9 above shows the layout when the user is viewing the record. Since mobile devices have small screens, the information needs to be displayed as large as possible. To navigate the different sections of the medical record, buttons are placed at the bottom of the screen. The user can then scroll horizontally through the buttons until they find their desired selection. However, since this project is an experimental project to test the different algorithms on a mobile device, user interface design was outside the scope of this work. Since we will have no user testing and only need an interface to run and test the various algorithms, a simple interface was designed. This basic interface, needed to satisfy our goals, contains one main screen and six buttons, which allow us to compress and decompress a medical record using the different algorithms. The plan to make this system into a working product for users is still in the pipeline and, based on the results of this project, is discussed in the Future Work section.

3.2.4 Algorithms

The algorithms that were chosen to be implemented in this project were based on the results of the evaluation done in section 2.3 of the background chapter. Since the project is being developed for low-end phones, we will be using the dictionary compression algorithms to test compression on a mobile device. Even though the dictionary algorithms LZ77 and LZW gave the worst compression ratios, they ran faster than the other algorithms by at least three seconds while using the least amount of static memory, as seen in the figures in section 2.3 of Chapter 2. These properties are beneficial, since there are limited resources on the phone and we want to make sure that the system does not drain the battery by being too resource heavy. The algorithms implemented in this project are LZ77, LZW and DEFLATE. LZ77 and LZW are well-known lossless data compression algorithms that were discussed in the background chapter. DEFLATE was chosen to be implemented since this algorithm is newer and uses a combination of LZ77 and Huffman coding to achieve compression. Huffman coding uses a specific method for choosing the representation of each symbol, resulting in a prefix code (sometimes called a prefix-free code); that is, the bit string representing some particular symbol is never a prefix of the bit string representing any other symbol. The prefix code expresses the most common source symbols using shorter strings of bits than are used for less common source symbols. Huffman was able to design the most efficient compression method of this type: no other mapping of individual source symbols to unique strings of bits will produce a smaller average output size when the actual symbol frequencies agree with those used to create the code.
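The prefix-code idea can be made concrete with a small sketch (illustrative only, not the project's code): repeatedly merging the two lowest-frequency nodes builds a tree in which more frequent symbols end up nearer the root, and therefore receive shorter bit strings.

```java
import java.util.*;

// Illustrative Huffman code construction. Not the project's implementation;
// DEFLATE builds its Huffman trees internally inside java.util.zip.
class HuffmanSketch {
    static class Node {
        int freq; Character sym; Node left, right;
        Node(int f, Character s, Node l, Node r) { freq = f; sym = s; left = l; right = r; }
    }

    static Map<Character, String> buildCodes(Map<Character, Integer> freq) {
        PriorityQueue<Node> pq = new PriorityQueue<>(Comparator.comparingInt(n -> n.freq));
        freq.forEach((c, f) -> pq.add(new Node(f, c, null, null)));
        while (pq.size() > 1) {
            // merge the two lowest-frequency nodes under a new internal node
            Node a = pq.poll(), b = pq.poll();
            pq.add(new Node(a.freq + b.freq, null, a, b));
        }
        Map<Character, String> codes = new HashMap<>();
        assign(pq.poll(), "", codes);
        return codes;
    }

    // walk the tree: left edges append '0', right edges append '1'
    static void assign(Node n, String prefix, Map<Character, String> codes) {
        if (n.sym != null) codes.put(n.sym, prefix.isEmpty() ? "0" : prefix);
        else { assign(n.left, prefix + "0", codes); assign(n.right, prefix + "1", codes); }
    }
}
```

With frequencies a:5, b:2, c:1, the symbol 'a' receives a one-bit code while 'b' and 'c' receive two-bit codes, and no code is a prefix of another.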
DEFLATE is a good algorithm to test because using Huffman coding on top of LZ77 should yield better compression ratios, and it will also tell us how this affects performance on the mobile device.

3.2.5 Storage Options

Medical records and other personal information need to be stored within the limited resources of a mobile device. For the system we needed to find a way to store the data such that it is easy to access but also provides the necessary security for the sensitive information. The options that we came across in our research were the phone's internal memory, the SIM card and the SD card. At first we thought that, ideally, the records should be stored on the internal memory of the mobile device. This way the record cannot be transferred from one phone to another without the user's knowledge and can only be accessed by someone with the correct permission. The downside to this option, and the reason that we did not go with internal memory, is that it only works if the mobile device is on and in working condition. If the mobile device's battery dies there is no way to access or retrieve the data off the phone until it is charged, and if the phone breaks that information could be lost. As a result we then looked into card options. The SIM card provides a good solution since it is not removed or swapped around as often as the SD card. A meeting was arranged with Hugo Roux, an employee of Clickatell who has experience working with SIM cards. He informed us that although there is a lot of security on SIM cards, the data is hard to access due to the fact that there are multiple service providers and each of them uses a different way to encode the data on their SIM cards. It then becomes difficult to create software that can handle all the forms of encoding that are used. The service providers are also very strict when it comes to accessing the SIM card, and only allow about 10 KB of data to be stored.
This makes SIM cards not a viable option, since we will be storing files that are bigger than 100 KB. So the option that we implement in the system is storing the medical records on the SD card. This provides an easy way of storing the data, and if the phone battery dies or the phone breaks, the SD card can be removed and the user still has access to their medical record. It also provides another way of transferring the medical record to the doctor in case Bluetooth is not operational in that area. The only concern we had with using SD cards is that in developing areas SD cards are often

swapped to share files like photos and music. This means that sensitive information could be accidentally leaked if all the contents of the card were copied to another. We have solved this problem with Shelley's encryption: even if someone gets access to the encrypted file, it can only be read by our system together with that user's PIN.

3.2.6 Software and Hardware Needed for Implementation

The system will be implemented on a mid-tier mobile phone. This type of device was selected because the application is designed mainly to be deployed in developing areas where there is a low income rate. These devices are affordable and provide the specifications needed to run the complex encryption and compression algorithms that are necessary.

Hardware

The phone that was going to be used for the first implementation of the system was a Samsung E250. The Samsung E250 was introduced in 2006 as an entry-level version of the Samsung D900; it had similar features but at a lower cost. This phone was chosen because it has good performance, powered by an ARM9 processor running at 230 MHz, with 10 MB of internal memory and support for SD cards. The Samsung E250 also came with Bluetooth 2.0 with stereo A2DP and was affordable, selling at around R350. While still in the process of setting up the Samsung E250 for development, we learnt of a new entry-level Android smartphone being sold for $100 (around R700): the IDEOS. The Huawei U8150 IDEOS runs Android 2.2 and is powered by a 528 MHz processor with 256 MB of RAM. It features a 2.8" touch screen, Bluetooth 2.1 and a microSDHC card slot. With the knowledge of this new smartphone being released, we decided to change our target phone to the IDEOS. The reason for this is that although the IDEOS does cost more, the trend is moving towards smartphones being released at very low prices, making them much more affordable.
This means that entry-level phones with good performance will be released and will replace the mid-tier phones of today, so, looking to develop for the future, we made the switch.

Software

For programming the application on the Samsung E250 we planned to use Java ME. Java ME is a Java platform designed for embedded systems, such as mobile devices. Java ME was designed by Sun Microsystems, which is now a subsidiary of Oracle Corporation. The platform replaced a similar technology, PersonalJava. Java ME devices implement a profile; the most common of these is the Mobile Information Device Profile (MIDP), aimed at mobile devices. Profiles are subsets of configurations, of which there are currently two: the Connected Limited Device Configuration (CLDC) and the Connected Device Configuration (CDC). The CLDC contains a strict subset of the Java class libraries, and is the minimum needed for a Java virtual machine to operate. The CLDC is basically used for classifying myriad devices into a fixed configuration. A configuration provides the most basic set of libraries and virtual-machine features that must be present in each implementation of a J2ME environment. Designed for mobile phones, the Mobile Information Device Profile includes a GUI and a data storage API. Applications written for this profile are called MIDlets. Almost all new mobile phones come with a MIDP implementation. The IDEOS runs Android 2.2. Android is a software framework for mobile devices, developed by the Open Handset Alliance and released by Google. It consists of an operating system, certain applications, as well as a Software Development Kit (SDK). The core of the Android OS is based on a Linux kernel. The SDK allows Android developers to develop Android applications using the Java programming language. Android is an open-source platform, and hence allows anyone to develop applications for Android devices.
Moreover, since Android is based on a Linux kernel, it supports the running of Linux binaries and scripts. Java ME and Android were chosen because they

allowed us to program in Java, a language we are comfortable with, so we would not have to waste time learning a new language.

3.3 Design Summary

This chapter discussed the design options and choices that were made in designing the system, highlighting the reasons for those choices. The chapter also ran through the components that will be needed and those that will not have any effect on the project goals. From this chapter we see that the system will be programmed in Java on a smartphone running Android 2.2, and will have a basic interface that allows the algorithms to be easily executed. The system will take an XML medical record, compress it using a variety of dictionary-based lossless compression algorithms, and then securely store the medical record on the phone's SD card. The final design of this system serves as the entry point to how the system will be implemented.

Chapter 4 Implementation

The project was implemented as an Android application, following the design plan that was made in the design chapter. The application was developed using the Android SDK in conjunction with the Eclipse IDE and the Android Development Tools (ADT) plugin. The ADT provides a powerful integrated environment in which to build this application, extending the capabilities of the Eclipse IDE. These tools were used because we were new to Android development and they were recommended as the fastest way to get started with Android. The Android SDK also provided a USB driver for Windows. This allowed us to run and debug the application on an actual phone, which means that we were able to test and debug the application on a real Android phone instead of the emulator. After setting up the environment we selected to develop for Android platform 2.2. The reason for this is based on Figure 10 below; the pie chart is based on the number of Android devices that have accessed the Android Market. We can clearly see that the majority of Android users are on the Android 2.2 platform [16].

Figure 10 - Android Platform Distribution

In this chapter we discuss the structure of the application that was implemented and the classes involved. We go through each class, giving details of the aim of that class and the data structures that were used. As can be seen from Figure 11 below, which shows the class hierarchy of the application, the program starts with the Main class, which is used to start the application. The next tier, which is all called from the Main class, holds the algorithms that will be tested in this project. The final tier, Performance, which is called by all three algorithms, is used to keep track of the application's performance for a given algorithm.
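As a rough illustration of what such a Performance tier might measure, the sketch below times a task and samples the heap-memory delta around it using standard Runtime calls. The class and method names here are hypothetical, not the project's actual code.

```java
// Hypothetical sketch of a Performance helper: record elapsed time and
// approximate memory delta around a compression run. Names are illustrative,
// not taken from the project's Performance class.
class PerformanceSketch {
    // Returns {elapsedMillis, memoryDeltaBytes} for the given task.
    static long[] measure(Runnable task) {
        Runtime rt = Runtime.getRuntime();
        long memBefore = rt.totalMemory() - rt.freeMemory();
        long t0 = System.nanoTime();
        task.run();
        long elapsedMs = (System.nanoTime() - t0) / 1_000_000;
        long memDelta = (rt.totalMemory() - rt.freeMemory()) - memBefore;
        return new long[]{elapsedMs, memDelta};
    }
}
```

On Android, heap figures from Runtime are only approximate (the garbage collector may run at any time), which is why such measurements are usually repeated and averaged.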

Figure 11 - Application's Class Hierarchy

4.1 Main

The Main class, as mentioned earlier, is responsible for running the application and extends the Activity class. An activity is a single, focused item that the user can interact with; in this case the activity is our Main Menu. The Main Menu is created with the method onCreate(Bundle savedInstanceState), which is called when the activity is starting and is where the initialisations go, such as setContentView(R.layout.main), which tells the Activity class to create a window in which we can place our user interface. The parameter R.layout.main refers to main.xml, which is used to describe the layout of the activity and the objects placed onto it. In main.xml there are descriptions for six buttons, including their size and position on the screen. The six buttons cover the three algorithms: for each algorithm, one button to compress and one to decompress. They are then given OnClickListeners in the onCreate method, which call the corresponding algorithm when pressed.

4.2 LZ77

When either of the LZ77 buttons on the Main Menu is selected, this class is called. The LZ77 class is responsible either for reading in the input file, compressing the data using the LZ77 algorithm and then writing the data to a newly created compressed file; or for reading in the compressed file, decoding the compressed data and writing the data to a created file that is identical to the original, depending on which button was pressed. The LZ77 algorithm is described in Chapter 2 of this report and compresses data by replacing sections of the data with a reference to matching data that has already passed through both the encoder and decoder. When an LZ77 object is constructed, its parameter is an integer which controls the maximum size of the search buffer.
If no integer is specified, the maximum buffer size is set to a default value. The ability to adjust the search buffer allows us to test how different buffer sizes (the bigger the buffer, the bigger the compression ratio) affect performance on a mobile device. In this class there are three methods: two are public, the methods for compression and decompression, and the third is a private method to control the size of the search buffer. The compress method is called with the name of the file that needs to be compressed as its parameter. The first task of the method is to create the medical root directory (if it has not already been created) on the SD card and open the FileReader and FileWriter for that directory. The method will throw an IOException in the event that the file that needs to be compressed cannot be found or

the program does not have access to write to the SD card. Then, using the pseudo code shown in Figure 12 below, the method was implemented to read in the data from the medical record and compress it, using a StringBuffer as a dictionary to search through the input for matches. If a match is found, the data is compressed by replacing the match with a triple. The triple refers to three tags that help the decoder find the original in the search buffer: the offset in the buffer, the length of the match and the character that follows the match. Once completed, the compressed output is flushed through the FileWriter to the newly created file and the streams are closed. A StringBuffer was used as the dictionary in this method because it would be constantly changing size and would be more efficient than using a String. However, it is not the most efficient choice; a data structure such as a tree or hash map would have provided a quicker means of searching through the input. This would have increased the performance of the algorithm but, due to time constraints, could not be implemented.

Figure 12 - LZ77 Compression Pseudo Code

The decompress method is called with the name of the file that needs to be decompressed as its parameter. This method starts the same way the compress method did, by opening the necessary FileReader and FileWriter and throwing an IOException if a problem occurs. Since LZ77 is asymmetric, decoding is a lot simpler than encoding, as decoding does not need to find the longest match in the dictionary. Using the pseudo code shown in Figure 13 below, the method was implemented to run through the compressed data and decode it back to its original state. This is achieved by finding the location of each string in the StringBuffer, given by the offset, and printing it out.
The challenge with this method is that the StringBuffer has to move in exactly the same way that it did in the compress method; otherwise all the offsets in the encoded data will be off, and when decoding it will not be possible to retrieve the correct strings, so the decompressed file will not match the original. As in the compress method, a StringBuffer was used as the dictionary to ensure that both buffers moved the same way, but the method could be improved with the aid of a tree or hash-map data structure.
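The compress/decompress pair described above can be condensed into the following sketch. It is simplified from the report's StringBuffer-based design (triples are kept as int arrays in memory rather than written to a file), so names and structure are illustrative rather than the project's actual code.

```java
import java.util.*;

// Minimal LZ77 sketch: encode as (offset, length, next-char) triples over a
// bounded search buffer. Illustrative only; simplified from the report's design.
class Lz77Sketch {
    // Each triple is {offsetBackFromCursor, matchLength, nextCharCode}.
    static List<int[]> compress(String input, int maxBuffer) {
        List<int[]> out = new ArrayList<>();
        int pos = 0;
        while (pos < input.length()) {
            int start = Math.max(0, pos - maxBuffer);
            String search = input.substring(start, pos);
            int bestLen = 0, bestOff = 0;
            // grow the lookahead until it no longer occurs in the search buffer
            for (int len = 1; pos + len < input.length(); len++) {
                int idx = search.indexOf(input.substring(pos, pos + len));
                if (idx < 0) break;
                bestLen = len;
                bestOff = search.length() - idx;   // distance back from the cursor
            }
            out.add(new int[]{bestOff, bestLen, input.charAt(pos + bestLen)});
            pos += bestLen + 1;
        }
        return out;
    }

    static String decompress(List<int[]> triples) {
        StringBuilder sb = new StringBuilder();
        for (int[] t : triples) {
            int from = sb.length() - t[0];
            for (int i = 0; i < t[1]; i++) sb.append(sb.charAt(from + i));
            sb.append((char) t[2]);               // the literal that follows the match
        }
        return sb.toString();
    }
}
```

Because the decoder rebuilds exactly the same buffer the encoder saw, a round trip reproduces the input byte for byte; this is the invariant the paragraph above describes.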

Figure 13 - LZ77 Decompression Pseudo Code

Finally, the trimSearchBuffer method is a simple private method that is called inside the compress and decompress methods. It takes in the StringBuffer that is being used and checks whether it is bigger than the maximum buffer size that was set when the LZ77 object was created. If the StringBuffer is bigger than the maximum buffer size, the method deletes from the start of the buffer the difference between the buffer size and the maximum buffer size. This allows the buffer to slide through the input: as the buffer gets too big, the beginning section is deleted to make room for more input data.

4.3 LZW

The dictionary used in LZ77 is constantly in flux; it is a dictionary buffer whose content depends on the part of the message currently being encoded. Therefore, if the message contains patterns that have already appeared but were shifted out of the dictionary buffer, the encoder has to output more triples (offset, length and next char) than in the situation where these patterns are still available. To avoid this limitation, the LZW class is implemented to maintain a dictionary that has the ability to keep entries permanently and is extended during the course of the encoding process. As mentioned in Chapter 2, LZW uses a dictionary to search through the input. The longest prefix that matches the input is encoded, and the unmatched symbol which follows is then added to the dictionary. To keep the program as modular as possible, all the compression algorithms in this program have the same basic structure; this allows for easy debugging and testing during implementation. Similar to LZ77, this class is called only when one of the LZW buttons is pressed. The class is responsible for compressing and decompressing a file using the LZW algorithm, based on the button which was pressed. When an LZW object is created, its parameter allows the starting size of the dictionary to be set.
If no parameter is specified, the default dictionary is created with a size of 256; this value is chosen because 256 covers all byte values (the extended ASCII table). As with the LZ77 class, we allow the dictionary size to be changed in order to test how different dictionary sizes affect performance on the mobile phone. In this class there are two public methods, compress and decompress. The compress method takes in a file name as a parameter and, like the compress method of LZ77, its first step is to create the medical directory on the SD card and open the necessary FileReader and FileWriter for that directory, throwing an IOException if a problem occurs. Using the pseudo code

shown in Figure 14 below, the LZW method was implemented to compress the medical record. To achieve compression, this method first creates a dictionary and fills it with the characters based on the starting size (default 256). It then runs through the input and checks whether the codeword (the next character combined with the previous match) is in the dictionary or needs to be added; if the codeword needs to be added to the dictionary, its dictionary position is also added to a results list. Once the input has been compressed, the results are flushed to a newly created file and the file streams are closed. There are two key data structures in this method: a HashMap with String and Integer parameters, and an ArrayList holding the results. The HashMap is used as the dictionary for this algorithm, with the two parameters being the key and value pair that needs to be stored: the position in the dictionary, given by an integer, and the String codeword at that position. The reason for using a HashMap instead of a Hashtable is that the HashMap is faster, due to being unsynchronised, and permits nulls. The ArrayList is used to store the results of compressing the medical record; it is a list of positions in the dictionary where the codewords can be found. The ArrayList is then iterated through by index and written out to the newly created compressed file. We used an ArrayList over a Vector because it, again, is faster due to being unsynchronised and is quick to traverse.

Figure 14 - LZW Compression Pseudo Code

There was a problem when implementing this algorithm: although it ran in good time, the compressed file that was produced was bigger than the original file by 30%. This was due to the structure of the medical records that were being compressed and the way the system produced its results.
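The compression loop just described can be sketched as follows. This is an illustrative simplification of the project's class: it seeds the dictionary with single characters and returns the list of dictionary positions in memory rather than writing them to a file.

```java
import java.util.*;

// Minimal LZW compressor sketch: dictionary seeded with single characters,
// extended with each new prefix+char codeword. Illustrative only.
class LzwSketch {
    static List<Integer> compress(String input, int startSize) {
        Map<String, Integer> dict = new HashMap<>();
        for (int i = 0; i < startSize; i++) dict.put(String.valueOf((char) i), i);
        List<Integer> out = new ArrayList<>();
        String w = "";
        for (char c : input.toCharArray()) {
            String wc = w + c;
            if (dict.containsKey(wc)) {
                w = wc;                        // keep growing the longest known prefix
            } else {
                out.add(dict.get(w));          // emit the code for that prefix
                dict.put(wc, dict.size());     // register the new codeword
                w = String.valueOf(c);
            }
        }
        if (!w.isEmpty()) out.add(dict.get(w));
        return out;
    }
}
```

On the classic example string "TOBEORNOTTOBEORTOBEORNOT", codes above 255 appear as soon as repeated substrings are seen again, which is where the compression comes from; the file-size blow-up the report describes comes from how those integer codes are then serialised, not from the loop itself.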
The medical records, which are in XML format, contain many different characters and a lot of unique tags, which means many entries need to be added to the dictionary. The system also produced its results in a way that made it easy for the decoder to differentiate between two dictionary positions in the file: a comma was placed between each position, so the output looked something like 56,67,103,91. Each comma, however, adds an extra character, which increases the file size. To correct this, the HashMap used in the first implementation was replaced with a ternary search tree (TST). The TST stores key-value pairs, where keys are strings and

values are objects. TST keys are stored and retrieved in sorted order, regardless of the order in which they are inserted into the tree. In addition, TSTs use memory efficiently to store large quantities of data. This would allow bigger codewords to be stored, which would reduce the amount of output produced. However, due to time constraints this algorithm was not implemented correctly. Since it could not produce accurate results, it was omitted from testing and is not considered in the experiment.

Figure 15 - LZW Decompression Pseudo Code

4.4 DEFLATE

Like all the other algorithm classes in this program, when the DEFLATE button is pressed this class is called with a file name as a parameter. The class compresses or decompresses the data based on which button was pressed. The two methods, compress and decompress, are structured similarly to the ones in LZ77: they open a stream to the file given as a parameter, compress or decompress the data using the deflate algorithm, and output the result to a newly created file. These methods were implemented using Java's built-in java.util.zip classes ZipInputStream and ZipOutputStream, because this gives us results for a fully optimised algorithm. The deflate algorithm used by zip is a variation of LZ77 combined with Huffman coding. It finds duplicated strings in the input data; the second occurrence of a string is replaced by a pointer to the previous occurrence, in the form of a (distance, length) pair. For compression, match lengths are compressed with one Huffman tree and match distances with another. The trees are stored in a compact form at the start of each block. A block is terminated when deflate determines that it would be useful to start another block with

fresh trees. Duplicated strings are found using a hash table. All input strings of length three are inserted in the hash table, and a hash index is computed for the next three bytes. If the hash chain for this index is not empty, all strings in the chain are compared with the current input string, and the longest match is selected. The hash chains are searched starting with the most recent strings, to favour small distances and thus take advantage of the Huffman encoding. The hash chains are singly linked and there are no deletions; the algorithm simply discards matches that are too old. To avoid a worst-case situation, very long hash chains are arbitrarily shortened at a certain length, so deflate does not always find the longest possible match, but it generally finds a match that is long enough. Deflate also defers the selection of matches with a lazy evaluation mechanism: after a match of length N has been found, deflate searches for a longer match at the next input byte. If a longer match is found, the previous match is shortened to a length of one and the process of lazy evaluation begins again; otherwise the original match is kept, and the next match search is attempted only N steps later. Lazy match evaluation is also subject to a runtime parameter: if the current match is long enough, deflate reduces the search for a longer match, speeding up the whole process, while if compression ratio is more important than speed, deflate attempts a complete second search even if the first match is already long enough. For decompression, deflate sets up a first-level table that covers some number of bits of input less than the length of the longest code. It reads that many bits from the stream and looks them up in the table. The table indicates whether the next code is that many bits or fewer (and how many); if it is, the table gives the value, otherwise it points to a next-level table, for which deflate grabs more bits and tries to decode a longer code.
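All of this machinery comes for free with Java's java.util.zip package, which is why our DEFLATE class performs so well. A minimal in-memory sketch of the compress/decompress pair (illustrative names, not the project's file-based implementation) might be:

```java
import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class DeflateSketch {

    // Deflate a byte array in one pass; the output buffer is sized
    // generously, since deflate can expand incompressible input slightly.
    public static byte[] compress(byte[] input) {
        Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
        deflater.setInput(input);
        deflater.finish();
        byte[] buf = new byte[input.length + 64];
        int n = deflater.deflate(buf);
        deflater.end();
        return Arrays.copyOf(buf, n);
    }

    // Inflate back to the original, whose length must be known here;
    // a real implementation would loop and grow the output buffer.
    public static byte[] decompress(byte[] compressed, int originalLength) {
        Inflater inflater = new Inflater();
        inflater.setInput(compressed);
        byte[] out = new byte[originalLength];
        try {
            inflater.inflate(out);
        } catch (DataFormatException e) {
            throw new IllegalStateException("corrupt deflate stream", e);
        }
        inflater.end();
        return out;
    }
}
```

The project's classes wrap the equivalent stream versions (ZipInputStream/ZipOutputStream) around file streams instead of byte arrays, but the underlying Deflater/Inflater work is the same.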
How many bits to use for the first lookup is a trade-off between the time it takes to decode and the time it takes to build the table. If building the table took no time, there would be only a first-level table covering all the way to the longest code; however, building the table takes much longer for more bits, since short codes are replicated many times in such a table. What deflate does is make the number of bits in the first table a variable and set it for maximum speed. Because deflate sends new trees relatively often, it can be set for a smaller first-level table than an application that has only one tree for all the data. For deflate, which has 286 possible codes for the length tree, the size of the first table is nine bits; the distance tree has 30 possible values, and the size of its first table is six bits.

4.5 Performance

The last class in this program, called from all three of the compression algorithms, is the performance class. Its main purpose is to monitor the statistics of each algorithm as it runs: execution time, the percentage of CPU used, and the RSS value, which shows how much memory the process is taking up. To obtain this data the class first obtains the process ID using Process.myPid(); it then calls three methods, ReadMemory(), ReadCPU() and ReadTime(). ReadMemory() and ReadCPU() work by accessing the proc filesystem on the phone, a read-only pseudo-filesystem used as an interface to the kernel's data structures. When one of these methods is called it opens a stream to the necessary file: for ReadMemory() the stream is opened to /proc/PID/status, where PID is the ID of the running process, and for ReadCPU() the stream is opened to /proc/stat. That file's data is then printed out to a results file for later analysis.
The ReadTime() method is called twice while the compression algorithm is running, once at the start and once at the end. Each call simply stores the current time; the execution time is then calculated by subtracting the start time from the finishing time, and the result is printed out to a file.
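The monitoring described above can be sketched as follows. The class name is illustrative, and readRssKb() guards against hosts where the proc filesystem is unavailable:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class PerfSketch {
    private long start;

    // First call of the timing pair: remember the start time.
    public void startTimer() {
        start = System.nanoTime();
    }

    // Second call: finishing time minus start time, in milliseconds.
    public long elapsedMillis() {
        return (System.nanoTime() - start) / 1_000_000;
    }

    // Resident set size of the current process in kB, read from the
    // proc filesystem as the report describes; returns -1 if /proc
    // is unavailable (e.g. on a non-Linux host).
    public static long readRssKb() {
        try (BufferedReader reader = new BufferedReader(new FileReader("/proc/self/status"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.startsWith("VmRSS:")) {
                    return Long.parseLong(line.replaceAll("[^0-9]", ""));
                }
            }
        } catch (IOException e) {
            // proc filesystem not present on this platform
        }
        return -1;
    }
}
```

On the phone the project reads /proc/PID/status for an explicit process ID rather than /proc/self/status, but the parsing is the same.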

4.6 Implementation summary

In this chapter we discussed how the application was implemented and how the classes fit together, describing in detail how each class works and the data structures it uses. We started by looking at the Main class to see how the user interface was created and laid out, and then examined the three compression classes and their methods. There we saw that, due to time constraints, the LZW class could not be fully completed and will therefore be omitted from testing, and that the LZ77 class was implemented but not fully optimised, which may affect the test results. Finally, the chapter described the performance class and how it will be used to produce the test data. The next chapter describes the test methodology and evaluates the results obtained from the experiment.

Chapter 5 Testing & Evaluation

5.1 Introduction

In order to investigate the effects of the algorithms on the IDEOS (an Android mobile device) and determine which one is most appropriate, a series of tests was performed. This was necessary because the hypothesis under question is quantitative: to determine which algorithm is most efficient on the mobile device, one needs to consider aspects like CPU and memory usage as well as the algorithm's execution time. The tests were conducted over a week, running each algorithm and monitoring its performance. The design of these tests is described in more detail below; the results are presented in Section 5.3, with a discussion of the findings in Section 5.4.

5.2 Test Methodology

Independent and dependent variables

In order to investigate the efficiency of the algorithms on the mobile device, three independent variables were used: the algorithm, the buffer size and the input file size. Two algorithms were tested, LZ77 and DEFLATE; since the third algorithm, LZW, could not produce viable results, it was not tested. Three buffer sizes were used, 512, 1024 and 2048, to see how they affect the compression ratio and the execution time of the program. Finally, three input file sizes were investigated to see how the program handles data of different sizes: medical records of 500kB, 800kB and 1000kB. In all cases the dependent variables were CPU usage, memory usage, compression ratio and execution time; these are the main measures used to determine the efficiency of an algorithm. CPU usage refers to the percentage of the phone's processor being used by the program while running. Memory usage is the resident set size (RSS) value while the program runs.
RSS is the portion of a process that resides in physical memory (RAM), measured in kilobytes (kB). The compression ratio measures how well the algorithm compressed the file and is calculated as the compressed file size divided by the original file size. Finally, the execution time is the length of time the algorithm runs and uses the phone's resources.

Testing Design

The test process was run on an IDEOS Android phone running Android 2.2, connected to a computer via USB cable. The process consisted of running the two algorithms nine times each, once for every combination of file size and buffer size. For example, the LZ77 algorithm was run on the 500KB medical record with each buffer size of 512, 1024 and 2048, and this was repeated for the 800KB and 1000KB medical records. The whole test process was completed ten times to increase the sample size and decrease the likelihood of random errors in the data. While the algorithms were running, the performance class (explained in chapter 4.5) monitored the dependent variables and printed them out to a file. A quick analysis of the data obtained showed that the CPU and memory figures produced were for the entire phone system, which had a number of other background processes running; this made it hard to isolate the CPU and memory usage of our program. To deal with this problem and acquire data with which we could effectively monitor CPU and memory usage, Linux's top command was used. Top provides an ongoing look at processor activity

in real time and displays a listing of the most CPU-intensive tasks on the system (see Appendix A for an output example). To obtain the data we ran the top command in the Android Debug Bridge (adb) shell on the computer while the algorithm was running. This provided the necessary results for the length of time the algorithm ran, stored in a file on the phone's SD card.

5.3 Results

This section presents the results gathered from the project over a variety of tests. The results stored on the mobile phone's SD card were imported into Excel, where the ten results for each test were averaged to produce the data set used for the following graphs.

CPU Usage

The following graphs show the CPU usage over the time that each algorithm ran, for both compression and decompression. Line graphs were chosen because they clearly show the range of CPU used and make it easy to compare the results across the three buffer sizes. Only the graphs for the 1000KB file are shown in the main report, as this file produced the most data and best shows the behaviour of the algorithms; the remaining graphs have been placed in Appendix B to avoid overwhelming the report.

[Figure 16 - CPU usage for LZ77 when compressing a 1000kB file: CPU (%) against time (seconds), one line per buffer size (512, 1024, 2048)]

[Figure 17 - CPU usage for DEFLATE when compressing a 1000kB file: CPU (%) against time (tenths of a second), one line per buffer size (512, 1024, 2048)]

As figures 16 and 17 show, for compression LZ77 ran for a much longer period of time than DEFLATE, whose run time was measured in tenths of a second. The range of CPU usage for the LZ77 algorithm is around 60-85%, whereas the range for DEFLATE is around 35-50% with a few outliers. The most important thing to note is that, logically, as the buffer grows the time the algorithm runs should also grow, because a bigger search buffer means more time is needed for each search. LZ77 displays this perfectly, running for 295 seconds with a 512 buffer and 472 seconds with a 2048 buffer. DEFLATE, however, shows the opposite: with a bigger buffer it runs quicker than with a smaller one. From the graphs it is clear that DEFLATE uses the least CPU and runs exceptionally quicker than LZ77.

[Figure 18 - CPU usage for LZ77 when decompressing a 1000kB file: CPU (%) against time (seconds), one line per buffer size (512, 1024, 2048)]

[Figure 19 - CPU usage for DEFLATE when decompressing a 1000kB file: CPU (%) against time (tenths of a second), one line per buffer size (512, 1024, 2048)]

The decompression graphs in figures 18 and 19 clearly show the asymmetric property of these compression algorithms: decompression runs a lot quicker than compression. LZ77 decompression uses a similar but slightly higher range of CPU than its compression, at 70-90%; the same holds for DEFLATE, which uses 40-50% of the CPU. Since decompression does not have to search through the buffer for a match but simply decodes a match given the offset, it runs quicker with a bigger buffer size, as more matches can be processed before the buffer needs to move, as shown in the two graphs above.

Memory Usage

The following graphs show the average memory usage of each algorithm for both compression and decompression. Bar graphs were chosen because the change in memory over time was very small, and they make it easy to compare memory usage across the different input file sizes. Figures 20 and 21 below show that the LZ77 algorithm's memory usage increases with buffer size, while the DEFLATE algorithm's memory usage decreases with buffer size. This is related to the algorithms' run times: since a larger buffer reduces DEFLATE's run time, it also reduces the amount of memory used on average. There is, however, an anomaly of unknown cause in the data for the 800kB file with a 1024 buffer size: the memory usage does not follow the pattern but drops significantly for both LZ77 and DEFLATE. Apart from this anomaly, it is clear that the most efficient memory user is DEFLATE with a 2048 buffer.

[Figure 20 - Average memory usage (RSS, kB) for LZ77 compression, by buffer size and file size (500, 800, 1000kB)]

[Figure 21 - Average memory usage (RSS, kB) for DEFLATE compression, by buffer size and file size (500, 800, 1000kB)]

The memory usage for decompression, shown in figures 22 and 23 below, shows that both LZ77 and DEFLATE use more memory on average than for compression: decompression uses an average of around 20600kB of memory, whereas compression uses only around 19500kB of the mobile device's memory. Decompression does not follow a distinct pattern; each buffer size has a similar memory usage, except for DEFLATE, whose memory usage decreases as the buffer size increases. As with compression, it is clear that DEFLATE with a 2048 buffer size uses the least memory and is the most effective for decompression.

[Figure 22 - Average memory usage (RSS, kB) for LZ77 decompression, by buffer size and file size (500, 800, 1000kB)]

[Figure 23 - Average memory usage (RSS, kB) for DEFLATE decompression, by buffer size and file size (500, 800, 1000kB)]

Compression Ratios

The following graphs show the compression ratios achieved by the algorithms when compressing each file with different buffer sizes. As with CPU usage, line graphs were chosen because they clearly show the compression ratio of the algorithm as the size of the buffer increases; points have been added to the lines to mark the discrete values obtained at each buffer size.

[Figure 24 - Compression ratios achieved with LZ77, by buffer size and file size (500, 800, 1000kB)]

[Figure 25 - Compression ratios achieved with DEFLATE, by buffer size and file size (500, 800, 1000kB)]

Figures 24 and 25 above show the result of increasing the buffer size while compressing a file. The compression ratio is calculated by dividing the output file size by the original file size; this means that the lower the compression ratio, the more effective the algorithm is at compressing the medical record. The LZ77 graph shows that with a small file and a medium-sized buffer (greater than 1024) you can achieve at least 50% compression. LZ77 does, however, struggle to compress big files: even with a buffer size of 2048 only around 20% compression is achieved. The main thing to note is that because of the way the DEFLATE algorithm is implemented, with Huffman encoding added to the algorithm, the compression ratio is constant for a given file size no matter how big or small the buffer. The graph also shows the effectiveness of the DEFLATE algorithm: even on a 1000KB file it still achieves greater than 50% compression.
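The ratio calculation described above is straightforward to reproduce. The sketch below (illustrative names, using java.util.zip's Deflater rather than the project's file-based code) compresses a byte array and reports compressed size over original size, so lower values mean better compression:

```java
import java.util.zip.Deflater;

public class RatioSketch {

    // Compression ratio = compressed size / original size.
    public static double compressionRatio(byte[] original) {
        Deflater deflater = new Deflater();
        deflater.setInput(original);
        deflater.finish();
        // Output buffer sized generously; deflate can slightly expand
        // incompressible input.
        byte[] buf = new byte[original.length + 64];
        int compressedSize = deflater.deflate(buf);
        deflater.end();
        return (double) compressedSize / original.length;
    }

    public static void main(String[] args) {
        byte[] repetitive = new String(new char[1000]).replace('\0', 'x').getBytes();
        // Highly repetitive data should yield a ratio far below 0.5.
        System.out.printf("ratio = %.3f%n", compressionRatio(repetitive));
    }
}
```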

5.4 Evaluation of results

As mentioned in the implementation summary (chapter 4.6), LZ77 was not optimised at all, and this can clearly be seen in the results. This implementation of LZ77 is very expensive, with high CPU usage and a very long execution time. With regard to memory usage LZ77 does not do too badly, using the same average as the DEFLATE algorithm, around 19500kB. From the results obtained, it seems that LZ77 is not a good option for compressing the medical records: with its poor compression ratios and large CPU usage when compressing big files, this implementation would drain the mobile device's battery fairly quickly. For applications on mobile devices to succeed in developing areas, they need to be optimised to use as little battery as possible, because of the scarce resources and charging facilities in those areas. For this reason the DEFLATE algorithm is the best solution in this situation for the compression and decompression of medical records: with a big buffer size it runs the quickest, at around 0.21 seconds, uses a lower range of the CPU (35-50%), and provides a good compression ratio, making it well suited to an application deployed in developing areas. However, since imported Java classes such as DEFLATE are distributed as bytecode, it is difficult to inspect the code to see how the algorithm was implemented and which part allows it to run at its optimal level; we can only assume that it exploits the low-level framework to achieve such speeds.

Chapter 6 Conclusion

The data compression field has always been an important part of computer science, and it is becoming increasingly popular and important today. Although mobile phones have become faster and data storage has become less expensive and more efficient, the growing significance of large data, such as medical records, on a mobile phone drives the use of at least some data compression, due to storage and transmission requirements. In many applications the question is no longer whether to compress data, but which compression method to apply. This project answers that question by designing and implementing a mobile application that can take a medical record, transmitted via Bluetooth, and compress it for effective storage. The aim of this project was to investigate whether there is a difference in energy consumption when using different compression algorithms, and to find an effective way of storing the compressed data on the mobile device. To conduct this study an application was developed in Java that allows different algorithms to be used to compress a medical record, while monitoring the effects of the process on the mobile phone. The application was developed for the IDEOS, an entry-level smartphone running the Android 2.2 platform. The first task in implementing the application was to investigate where the medical records would be stored. The options under consideration were the SIM card, an SD card and the mobile phone's internal memory. Since accessibility was considered a high priority, the SD card was chosen as the most suitable storage medium; the other storage media had too many limitations, such as limited storage space and poor accessibility in the case of phone failure. Although the project implementation was under strict time constraints, two of the three algorithms were fully implemented and produced good test results.
The two algorithms implemented were LZ77 and DEFLATE, two dictionary-type compression algorithms well known for their low memory usage. A series of tests was run on the phone to analyse the performance of the algorithms: three file sizes (500kB, 800kB and 1000kB) were compressed with three buffer sizes (512, 1024 and 2048) while the resources used were monitored. The more resources an algorithm uses, the higher its energy consumption will be. The results revealed that the LZ77 algorithm was very resource-heavy, using a large percentage of the CPU for a long period of time. This was due to using basic data structures in the implementation, which work on a small scale but proved too demanding when compressing large medical records. The results also showed that DEFLATE was very effective at compressing large medical records, using very little CPU and memory for a short period of time. These results lead to the conclusion that there is a difference in energy consumption when using different compression algorithms; however, it depends on how they are implemented. Since DEFLATE is an imported class distributed as bytecode, it is well optimised and thus produces good results. The two main challenges encountered during the project were the time constraints and an initial failure to fully understand its scope. The time to complete the project was limited, and poor time management with respect to the project plan caused progress to fall behind schedule, which resulted in the poor implementation of the algorithms. Not fully understanding the scope meant not knowing whether progress was on the right track or what the next steps should be.

These challenges have taught me that understanding and designing a project well is vital to its success: without a good foundation it is easy to get sidetracked or do things that do not help solve the problem. They have also taught me that time management skills are essential to fully complete large projects. Overall this project was successful because we were able to implement a system that encrypts and compresses medical records with low energy consumption; the compressed medical records are then stored effectively for easy access on the mobile phone's SD card. The limitations of this project are the small sample size relative to the number of variables and the fact that only two algorithms were implemented: each test condition was run only ten times, which helps reduce inconsistency but does not prove the integrity of the results, and the conclusion would carry more weight if more algorithms had been tested.

6.1 Future Work

There are two elements of this project that could be expanded in future work. The first relates to understanding the optimisation behind the DEFLATE algorithm. Since the DEFLATE implementation is bytecode that we were unable to analyse in this report, gaining a better understanding of the algorithm may help to improve it further; this understanding could also lead to a study of whether combining other encoding techniques instead of Huffman encoding could yield better results. The second element that could be expanded is the interface, leading to this application being used by real patients. Since the compression component is complete and Shelley Petzer has completed the security component, all that is needed is a good, easy-to-use interface to allow the application to be deployed. The interface would allow patients not only to send and receive their medical records but also to view, update and make notes on them.
This will allow them to take more interest in their own health care.

References

1. Nori, Anil. Mobile and Embedded Databases. SIGMOD '07: Proceedings of the 2007 ACM SIGMOD International Conference on Management of Data.
2. Midhun, M. Data Compression Techniques. Sree Narayana Gurukulam College of Engineering, Kolenchery.
3. Sakr, Sherif. XML compression techniques: A survey and comparison. Journal of Computer and System Sciences, 75 (2009).
4. Lauther, Ulrich and Lukovszki, Tamas. Space Efficient Algorithms for the Burrows-Wheeler. Algorithmica 58 (2010).
5. Sadler, Christopher M. and Martonosi, Margaret. Data compression algorithms for energy-constrained devices in delay tolerant networks. SenSys '06: Proceedings of the 4th International Conference on Embedded Networked Sensor Systems (2006).
6. Ferreira, Artur, Oliveira, Arlindo and Figueiredo, Mario. Time and Memory Efficient Lempel-Ziv Compression Using Suffix Arrays. arXiv (2009).
7. Barr, Kenneth C. and Asanovic, Krste. Energy-Aware Lossless Data Compression. ACM Transactions on Computer Systems, Vol. 24, No. 3 (August 2006).
8. Wang, Le and Manner, J. Evaluation of data compression for energy-aware communication in mobile networks. CyberC '09: International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery (2009).
9. Carus, A. and Mesut, A. Fast Text Compression Using Multiple Static Dictionaries. Information Technology Journal 9 (2010).
10. Burbey, Ingrid and Martin, Thomas L. Predicting future locations using prediction-by-partial-match. MELT '08: Proceedings of the First ACM International Workshop on Mobile Entity Localization and Tracking in GPS-less Environments (2008).
11. Burgsteiner, H. and Prietl, J. A Framework for Secure Communication of Mobile E-health Applications. In Medical Informatics meets eHealth.
12. Fraser, Hamish S.F., Biondich, Paul, Moodley, Deshen, Choi, Sharon, Mamlin, Burke W. and Szolovits, Peter. Implementing electronic medical record systems in developing countries. Informatics in Primary Care, Volume 13, Number 2, June 2005.
13. Siika, A.M., Rotich, J.K. and Simiyu, C.J. An electronic medical record system for ambulatory care of HIV-infected patients in Kenya. International Journal of Medical Informatics 2005;74(5).
14. Fraser, H., Jazayeri, D., Nevil, P. et al. An information system and medical record to support HIV treatment in rural Haiti. British Medical Journal 2004;329.
15. Milberg, J. Adapting an HIV/AIDS clinical information system for use in Kampala, Uganda. Proceedings of Helina 2003, Johannesburg, 2003.
16. Android Developers Resources.

Appendices

Appendix A

This is an example of the output produced when top was run in the adb shell while the LZ77 compression algorithm was executing. Results like this were produced every second until the algorithm had completed.

PID  CPU%  S  #THR  VSS     RSS      PCY  UID     Name
     %     R        kB      19508kB  fg   app_82  hons.compress
     %     R  1     912kB   440kB    fg   system  system_server
     %     S  1     684kB   344kB    fg   root    gs_wq
55   0%    S  1     0kB     0kB      fg   root    aps_wq
61   0%    S  1     0kB     0kB      fg   root    aps_wq
85   0%    S        kB      260kB    fg   shell   /sbin/adbd

Appendix B

This appendix contains the remaining graphs made from the results discussed in Chapter 5.3.

[Figure 26 - CPU usage for LZ77 when compressing a 500kB file: CPU (%) against time (seconds), one line per buffer size (512, 1024, 2048)]

[Figure 27 - CPU usage for LZ77 when decompressing a 500kB file: CPU (%) against time (seconds), one line per buffer size (512, 1024, 2048)]

[Figure 28 - CPU usage for LZ77 when compressing an 800kB file: CPU (%) against time (seconds), one line per buffer size (512, 1024, 2048)]

[Figure 29 - CPU usage for LZ77 when decompressing an 800kB file: CPU (%) against time (seconds), one line per buffer size (512, 1024, 2048)]

[Figure 30 - CPU usage for DEFLATE when compressing a 500kB file: CPU (%) against time (tenths of a second), one line per buffer size (512, 1024, 2048)]

[Figure 31 - CPU usage for DEFLATE when decompressing a 500kB file: CPU (%) against time (tenths of a second), one line per buffer size (512, 1024, 2048)]

[Figure 32 - CPU usage for DEFLATE when compressing an 800kB file: CPU (%) against time (tenths of a second), one line per buffer size (512, 1024, 2048)]

[Figure 33 - CPU usage for DEFLATE when decompressing an 800kB file: CPU (%) against time (tenths of a second), one line per buffer size (512, 1024, 2048)]


More information

Image Compression through DCT and Huffman Coding Technique

Image Compression through DCT and Huffman Coding Technique International Journal of Current Engineering and Technology E-ISSN 2277 4106, P-ISSN 2347 5161 2015 INPRESSCO, All Rights Reserved Available at http://inpressco.com/category/ijcet Research Article Rahul

More information

Remote Network Accelerator

Remote Network Accelerator Remote Network Accelerator Evaluation Guide LapLink Software 10210 NE Points Drive Kirkland, WA 98033 Tel: (425) 952-6000 www.laplink.com LapLink Remote Network Accelerator Evaluation Guide Page 1 of 19

More information

Sawmill Log Analyzer Best Practices!! Page 1 of 6. Sawmill Log Analyzer Best Practices

Sawmill Log Analyzer Best Practices!! Page 1 of 6. Sawmill Log Analyzer Best Practices Sawmill Log Analyzer Best Practices!! Page 1 of 6 Sawmill Log Analyzer Best Practices! Sawmill Log Analyzer Best Practices!! Page 2 of 6 This document describes best practices for the Sawmill universal

More information

Key Components of WAN Optimization Controller Functionality

Key Components of WAN Optimization Controller Functionality Key Components of WAN Optimization Controller Functionality Introduction and Goals One of the key challenges facing IT organizations relative to application and service delivery is ensuring that the applications

More information

Attix5 Pro Server Edition

Attix5 Pro Server Edition Attix5 Pro Server Edition V7.0.3 User Manual for Linux and Unix operating systems Your guide to protecting data with Attix5 Pro Server Edition. Copyright notice and proprietary information All rights reserved.

More information

SiteCelerate white paper

SiteCelerate white paper SiteCelerate white paper Arahe Solutions SITECELERATE OVERVIEW As enterprises increases their investment in Web applications, Portal and websites and as usage of these applications increase, performance

More information

EMR Benefits, Challenges and Uses

EMR Benefits, Challenges and Uses EMR Benefits, Challenges and Uses Benefits Our work has greatly benefited from using the PIH-EMR: it has simplified many tasks essential to high quality patient care, and has allowed us to perform important

More information

QLIKVIEW ARCHITECTURE AND SYSTEM RESOURCE USAGE

QLIKVIEW ARCHITECTURE AND SYSTEM RESOURCE USAGE QLIKVIEW ARCHITECTURE AND SYSTEM RESOURCE USAGE QlikView Technical Brief April 2011 www.qlikview.com Introduction This technical brief covers an overview of the QlikView product components and architecture

More information

Introweb Remote Backup Client for Mac OS X User Manual. Version 3.20

Introweb Remote Backup Client for Mac OS X User Manual. Version 3.20 Introweb Remote Backup Client for Mac OS X User Manual Version 3.20 1. Contents 1. Contents...2 2. Product Information...4 3. Benefits...4 4. Features...5 5. System Requirements...6 6. Setup...7 6.1. Setup

More information

Base One's Rich Client Architecture

Base One's Rich Client Architecture Base One's Rich Client Architecture Base One provides a unique approach for developing Internet-enabled applications, combining both efficiency and ease of programming through its "Rich Client" architecture.

More information

Compression techniques

Compression techniques Compression techniques David Bařina February 22, 2013 David Bařina Compression techniques February 22, 2013 1 / 37 Contents 1 Terminology 2 Simple techniques 3 Entropy coding 4 Dictionary methods 5 Conclusion

More information

TANDBERG MANAGEMENT SUITE 10.0

TANDBERG MANAGEMENT SUITE 10.0 TANDBERG MANAGEMENT SUITE 10.0 Installation Manual Getting Started D12786 Rev.16 This document is not to be reproduced in whole or in part without permission in writing from: Contents INTRODUCTION 3 REQUIREMENTS

More information

Intellicus Enterprise Reporting and BI Platform

Intellicus Enterprise Reporting and BI Platform Intellicus Cluster and Load Balancer Installation and Configuration Manual Intellicus Enterprise Reporting and BI Platform Intellicus Technologies [email protected] www.intellicus.com Copyright 2012

More information

Data Reduction: Deduplication and Compression. Danny Harnik IBM Haifa Research Labs

Data Reduction: Deduplication and Compression. Danny Harnik IBM Haifa Research Labs Data Reduction: Deduplication and Compression Danny Harnik IBM Haifa Research Labs Motivation Reducing the amount of data is a desirable goal Data reduction: an attempt to compress the huge amounts of

More information

Arithmetic Coding: Introduction

Arithmetic Coding: Introduction Data Compression Arithmetic coding Arithmetic Coding: Introduction Allows using fractional parts of bits!! Used in PPM, JPEG/MPEG (as option), Bzip More time costly than Huffman, but integer implementation

More information

Upgrading Small Business Client and Server Infrastructure E-LEET Solutions. E-LEET Solutions is an information technology consulting firm

Upgrading Small Business Client and Server Infrastructure E-LEET Solutions. E-LEET Solutions is an information technology consulting firm Thank you for considering E-LEET Solutions! E-LEET Solutions is an information technology consulting firm that specializes in low-cost high-performance computing solutions. This document was written as

More information

Attix5 Pro Server Edition

Attix5 Pro Server Edition Attix5 Pro Server Edition V7.0.2 User Manual for Mac OS X Your guide to protecting data with Attix5 Pro Server Edition. Copyright notice and proprietary information All rights reserved. Attix5, 2013 Trademarks

More information

SOS Suite Installation Guide

SOS Suite Installation Guide SOS Suite Installation Guide rev. 8/31/2010 Contents Overview Upgrading from SOS 2009 and Older Pre-Installation Recommendations Network Installations System Requirements Preparing for Installation Installing

More information

Configuring Backup Settings. Copyright 2009, Oracle. All rights reserved.

Configuring Backup Settings. Copyright 2009, Oracle. All rights reserved. Configuring Backup Settings Objectives After completing this lesson, you should be able to: Use Enterprise Manager to configure backup settings Enable control file autobackup Configure backup destinations

More information

Data Backup Options for SME s

Data Backup Options for SME s Data Backup Options for SME s As an IT Solutions company, Alchemy are often asked what is the best backup solution? The answer has changed over the years and depends a lot on your situation. We recognize

More information

Sensor Monitoring and Remote Technologies 9 Voyager St, Linbro Park, Johannesburg Tel: +27 11 608 4270 ; www.batessecuredata.co.

Sensor Monitoring and Remote Technologies 9 Voyager St, Linbro Park, Johannesburg Tel: +27 11 608 4270 ; www.batessecuredata.co. Sensor Monitoring and Remote Technologies 9 Voyager St, Linbro Park, Johannesburg Tel: +27 11 608 4270 ; www.batessecuredata.co.za 1 Environment Monitoring in computer rooms, data centres and other facilities

More information

Hardware Configuration Guide

Hardware Configuration Guide Hardware Configuration Guide Contents Contents... 1 Annotation... 1 Factors to consider... 2 Machine Count... 2 Data Size... 2 Data Size Total... 2 Daily Backup Data Size... 2 Unique Data Percentage...

More information

Single Product Review - Bitdefender Security for Virtualized Environments - November 2012

Single Product Review - Bitdefender Security for Virtualized Environments - November 2012 Single Product Review Bitdefender Security for Virtualized Environments Language: English November 2012 Last Revision: 1 st December 2012 Review commissioned by Bitdefender - 1 - Bitdefender Security for

More information

DAZZLE INTEGRATED DATA BACKUP FEATURE.

DAZZLE INTEGRATED DATA BACKUP FEATURE. DAZZLE INTEGRATED DATA BACKUP FEATURE. To simplify the backup process and to make sure even the busiest (or laziest) shops have no excuse not to make data backups, we have created a simple on-screen backup

More information

User Guide. Telekom Malaysia Berhad (128740-P) www.tm.com.my/sme Call 1-800-888-SME (763) Visit TMpoint/TM Authorised Resellers

User Guide. Telekom Malaysia Berhad (128740-P) www.tm.com.my/sme Call 1-800-888-SME (763) Visit TMpoint/TM Authorised Resellers User Guide Telekom Malaysia Berhad (128740-P) www.tm.com.my/sme Call 1-800-888-SME (763) Visit TMpoint/TM Authorised Resellers 1 2 Office in a Box Congratulations on making the right decision for your

More information

Online Backup Client User Manual

Online Backup Client User Manual Online Backup Client User Manual Software version 3.21 For Linux distributions January 2011 Version 2.0 Disclaimer This document is compiled with the greatest possible care. However, errors might have

More information

Overview. Timeline Cloud Features and Technology

Overview. Timeline Cloud Features and Technology Overview Timeline Cloud is a backup software that creates continuous real time backups of your system and data to provide your company with a scalable, reliable and secure backup solution. Storage servers

More information

Junos Pulse for Google Android

Junos Pulse for Google Android Junos Pulse for Google Android User Guide Release 4.0 October 2012 R1 Copyright 2012, Juniper Networks, Inc. Juniper Networks, Junos, Steel-Belted Radius, NetScreen, and ScreenOS are registered trademarks

More information

Comparison of different image compression formats. ECE 533 Project Report Paula Aguilera

Comparison of different image compression formats. ECE 533 Project Report Paula Aguilera Comparison of different image compression formats ECE 533 Project Report Paula Aguilera Introduction: Images are very important documents nowadays; to work with them in some applications they need to be

More information

Empress Embedded Database. for. Medical Systems

Empress Embedded Database. for. Medical Systems Empress Embedded Database for Medical Systems www.empress.com Empress Software Phone: 301-220-1919 1. Introduction From patient primary care information system to medical imaging system to life-critical

More information

Service Overview CloudCare Online Backup

Service Overview CloudCare Online Backup Service Overview CloudCare Online Backup CloudCare s Online Backup service is a secure, fully automated set and forget solution, powered by Attix5, and is ideal for organisations with limited in-house

More information

NOVA COLLEGE-WIDE COURSE CONTENT SUMMARY ITE 115 - INTRODUCTION TO COMPUTER APPLICATIONS & CONCEPTS (3 CR.)

NOVA COLLEGE-WIDE COURSE CONTENT SUMMARY ITE 115 - INTRODUCTION TO COMPUTER APPLICATIONS & CONCEPTS (3 CR.) Revised 5/2010 NOVA COLLEGE-WIDE COURSE CONTENT SUMMARY ITE 115 - INTRODUCTION TO COMPUTER APPLICATIONS & CONCEPTS (3 CR.) Course Description Covers computer concepts and Internet skills and uses a software

More information

ROM ACCESS CONTROL USER S MANUAL

ROM ACCESS CONTROL USER S MANUAL ROM ACCESS CONTROL USER S MANUAL Manual Software Pro-Access Page: 1 PRO-ACCESS SOFTWARE GUIDE PRO-ACCESS SOFTWARE GUIDE 1 0. INTRODUCTION 3 1. INSTALLIG THE SOFTWARE 4 2. SOFTWARE OPERATORS AND COMPETENCIES.

More information

Workflow Templates Library

Workflow Templates Library Workflow s Library Table of Contents Intro... 2 Active Directory... 3 Application... 5 Cisco... 7 Database... 8 Excel Automation... 9 Files and Folders... 10 FTP Tasks... 13 Incident Management... 14 Security

More information

Server & Workstation Installation of Client Profiles for Windows

Server & Workstation Installation of Client Profiles for Windows C ase Manag e m e n t by C l i e n t P rofiles Server & Workstation Installation of Client Profiles for Windows T E C H N O L O G Y F O R T H E B U S I N E S S O F L A W General Notes to Prepare for Installing

More information

WHITE PAPER Improving Storage Efficiencies with Data Deduplication and Compression

WHITE PAPER Improving Storage Efficiencies with Data Deduplication and Compression WHITE PAPER Improving Storage Efficiencies with Data Deduplication and Compression Sponsored by: Oracle Steven Scully May 2010 Benjamin Woo IDC OPINION Global Headquarters: 5 Speen Street Framingham, MA

More information

OPTAC Fleet Viewer. Instruction Manual

OPTAC Fleet Viewer. Instruction Manual OPTAC Fleet Viewer Instruction Manual Stoneridge Limited Claverhouse Industrial Park Dundee DD4 9UB Help-line Telephone Number: 0870 887 9256 E-Mail: [email protected] Document version 4.0 Part Number:

More information

Optum Patient Portal. 70 Royal Little Drive. Providence, RI 02904. Copyright 2002-2013 Optum. All rights reserved. Updated: 3/7/13

Optum Patient Portal. 70 Royal Little Drive. Providence, RI 02904. Copyright 2002-2013 Optum. All rights reserved. Updated: 3/7/13 Optum Patient Portal 70 Royal Little Drive Providence, RI 02904 Copyright 2002-2013 Optum. All rights reserved. Updated: 3/7/13 Table of Contents 1 Patient Portal Activation...1 1.1 Pre-register a Patient...1

More information

Exchange Mailbox Protection Whitepaper

Exchange Mailbox Protection Whitepaper Exchange Mailbox Protection Contents 1. Introduction... 2 Documentation... 2 Licensing... 2 Exchange add-on comparison... 2 Advantages and disadvantages of the different PST formats... 3 2. How Exchange

More information

SharePoint Performance Optimization

SharePoint Performance Optimization White Paper AX Series SharePoint Performance Optimization September 2011 WP_SharePoint_091511.1 TABLE OF CONTENTS 1 Introduction... 2 2 Executive Overview... 2 3 SSL Offload... 4 4 Connection Reuse...

More information

IMCM: A Flexible Fine-Grained Adaptive Framework for Parallel Mobile Hybrid Cloud Applications

IMCM: A Flexible Fine-Grained Adaptive Framework for Parallel Mobile Hybrid Cloud Applications Open System Laboratory of University of Illinois at Urbana Champaign presents: Outline: IMCM: A Flexible Fine-Grained Adaptive Framework for Parallel Mobile Hybrid Cloud Applications A Fine-Grained Adaptive

More information

Tandberg Data AccuVault RDX

Tandberg Data AccuVault RDX Tandberg Data AccuVault RDX Binary Testing conducts an independent evaluation and performance test of Tandberg Data s latest small business backup appliance. Data backup is essential to their survival

More information

Maximizing Hadoop Performance with Hardware Compression

Maximizing Hadoop Performance with Hardware Compression Maximizing Hadoop Performance with Hardware Compression Robert Reiner Director of Marketing Compression and Security Exar Corporation November 2012 1 What is Big? sets whose size is beyond the ability

More information

PIONEER RESEARCH & DEVELOPMENT GROUP

PIONEER RESEARCH & DEVELOPMENT GROUP SURVEY ON RAID Aishwarya Airen 1, Aarsh Pandit 2, Anshul Sogani 3 1,2,3 A.I.T.R, Indore. Abstract RAID stands for Redundant Array of Independent Disk that is a concept which provides an efficient way for

More information

Pcounter Web Report 3.x Installation Guide - v2014-11-30. Pcounter Web Report Installation Guide Version 3.4

Pcounter Web Report 3.x Installation Guide - v2014-11-30. Pcounter Web Report Installation Guide Version 3.4 Pcounter Web Report 3.x Installation Guide - v2014-11-30 Pcounter Web Report Installation Guide Version 3.4 Table of Contents Table of Contents... 2 Installation Overview... 3 Installation Prerequisites

More information

FileMaker Pro and Microsoft Office Integration

FileMaker Pro and Microsoft Office Integration FileMaker Pro and Microsoft Office Integration page Table of Contents Executive Summary...3 Introduction...3 Top Reasons to Read This Guide...3 Before You Get Started...4 Downloading the FileMaker Trial

More information

Offloading file search operation for performance improvement of smart phones

Offloading file search operation for performance improvement of smart phones Offloading file search operation for performance improvement of smart phones Ashutosh Jain [email protected] Vigya Sharma [email protected] Shehbaz Jaffer [email protected] Kolin Paul

More information

BENCHMARKING CLOUD DATABASES CASE STUDY on HBASE, HADOOP and CASSANDRA USING YCSB

BENCHMARKING CLOUD DATABASES CASE STUDY on HBASE, HADOOP and CASSANDRA USING YCSB BENCHMARKING CLOUD DATABASES CASE STUDY on HBASE, HADOOP and CASSANDRA USING YCSB Planet Size Data!? Gartner s 10 key IT trends for 2012 unstructured data will grow some 80% over the course of the next

More information

Technical White Paper BlackBerry Enterprise Server

Technical White Paper BlackBerry Enterprise Server Technical White Paper BlackBerry Enterprise Server BlackBerry Enterprise Edition for Microsoft Exchange For GPRS Networks Research In Motion 1999-2001, Research In Motion Limited. All Rights Reserved Table

More information

Web Analytics Understand your web visitors without web logs or page tags and keep all your data inside your firewall.

Web Analytics Understand your web visitors without web logs or page tags and keep all your data inside your firewall. Web Analytics Understand your web visitors without web logs or page tags and keep all your data inside your firewall. 5401 Butler Street, Suite 200 Pittsburgh, PA 15201 +1 (412) 408 3167 www.metronomelabs.com

More information

Reference Guide WindSpring Data Management Technology (DMT) Solving Today s Storage Optimization Challenges

Reference Guide WindSpring Data Management Technology (DMT) Solving Today s Storage Optimization Challenges Reference Guide WindSpring Data Management Technology (DMT) Solving Today s Storage Optimization Challenges September 2011 Table of Contents The Enterprise and Mobile Storage Landscapes... 3 Increased

More information

Stellar Phoenix. SQL Database Repair 6.0. Installation Guide

Stellar Phoenix. SQL Database Repair 6.0. Installation Guide Stellar Phoenix SQL Database Repair 6.0 Installation Guide Overview Stellar Phoenix SQL Database Repair software is an easy to use application designed to repair corrupt or damaged Microsoft SQL Server

More information

How To Use Attix5 Pro For A Fraction Of The Cost Of A Backup

How To Use Attix5 Pro For A Fraction Of The Cost Of A Backup Service Overview Business Cloud Backup Techgate s Business Cloud Backup service is a secure, fully automated set and forget solution, powered by Attix5, and is ideal for organisations with limited in-house

More information

An Oracle White Paper July 2011. Oracle Primavera Contract Management, Business Intelligence Publisher Edition-Sizing Guide

An Oracle White Paper July 2011. Oracle Primavera Contract Management, Business Intelligence Publisher Edition-Sizing Guide Oracle Primavera Contract Management, Business Intelligence Publisher Edition-Sizing Guide An Oracle White Paper July 2011 1 Disclaimer The following is intended to outline our general product direction.

More information

Quick Start Guide. www.uptrendsinfra.com

Quick Start Guide. www.uptrendsinfra.com Quick Start Guide Uptrends Infra is a cloud service that monitors your on-premise hardware and software infrastructure. This Quick Start Guide contains the instructions to get you up to speed with your

More information

RecoveryVault Express Client User Manual

RecoveryVault Express Client User Manual For Linux distributions Software version 4.1.7 Version 2.0 Disclaimer This document is compiled with the greatest possible care. However, errors might have been introduced caused by human mistakes or by

More information

Installation and Operation Manual Portable Device Manager, Windows version

Installation and Operation Manual Portable Device Manager, Windows version Installation and Operation Manual version version About this document This document is intended as a guide for installation, maintenance and troubleshooting of Portable Device Manager (PDM) and is relevant

More information

redcoal EmailSMS for MS Outlook and Lotus Notes

redcoal EmailSMS for MS Outlook and Lotus Notes redcoal EmailSMS for MS Outlook and Lotus Notes Technical Support: [email protected] Or visit http://www.redcoal.com/ All Documents prepared or furnished by redcoal Pty Ltd remains the property of redcoal

More information

Project Proposal. Data Storage / Retrieval with Access Control, Security and Pre-Fetching

Project Proposal. Data Storage / Retrieval with Access Control, Security and Pre-Fetching 1 Project Proposal Data Storage / Retrieval with Access Control, Security and Pre- Presented By: Shashank Newadkar Aditya Dev Sarvesh Sharma Advisor: Prof. Ming-Hwa Wang COEN 241 - Cloud Computing Page

More information

Super Manager User Manual. English v1.0.3 2011/06/15 Copyright by GPC Http://gpc.myweb.hinet.net

Super Manager User Manual. English v1.0.3 2011/06/15 Copyright by GPC Http://gpc.myweb.hinet.net Super Manager User Manual English v1.0.3 2011/06/15 Copyright by GPC Http://gpc.myweb.hinet.net How to launch Super Manager? Click the Super Manager in Launcher or add a widget into your Launcher (Home

More information

WHAT'S NEW WITH SALESFORCE FOR OUTLOOK

WHAT'S NEW WITH SALESFORCE FOR OUTLOOK WHAT'S NEW WITH SALESFORCE FOR OUTLOOK Salesforce for Outlook v2.8.1 Salesforce for Outlook v2.8.1, we ve improved syncing and fixed issues with the side panel and error log. Sync Side Panel Error Log

More information

Enterprise Remote Control 5.6 Manual

Enterprise Remote Control 5.6 Manual Enterprise Remote Control 5.6 Manual Solutions for Network Administrators Copyright 2015, IntelliAdmin, LLC Revision 3/26/2015 http://www.intelliadmin.com Page 1 Table of Contents What is Enterprise Remote

More information

Introducing Graves IT Solutions Online Backup System

Introducing Graves IT Solutions Online Backup System Introducing Graves IT Solutions Online Backup System Graves IT Solutions is proud to announce an exciting new Online Backup System designed to protect your data by placing it online into the cloud. Graves

More information

1. Product Information

1. Product Information ORIXCLOUD BACKUP CLIENT USER MANUAL LINUX 1. Product Information Product: Orixcloud Backup Client for Linux Version: 4.1.7 1.1 System Requirements Linux (RedHat, SuSE, Debian and Debian based systems such

More information

Streaming Lossless Data Compression Algorithm (SLDC)

Streaming Lossless Data Compression Algorithm (SLDC) Standard ECMA-321 June 2001 Standardizing Information and Communication Systems Streaming Lossless Data Compression Algorithm (SLDC) Phone: +41 22 849.60.00 - Fax: +41 22 849.60.01 - URL: http://www.ecma.ch

More information

Chapter 5. Regression Testing of Web-Components

Chapter 5. Regression Testing of Web-Components Chapter 5 Regression Testing of Web-Components With emergence of services and information over the internet and intranet, Web sites have become complex. Web components and their underlying parts are evolving

More information

REMOTE BACKUP-WHY SO VITAL?

REMOTE BACKUP-WHY SO VITAL? REMOTE BACKUP-WHY SO VITAL? Any time your company s data or applications become unavailable due to system failure or other disaster, this can quickly translate into lost revenue for your business. Remote

More information

Peter Mileff PhD SOFTWARE ENGINEERING. The Basics of Software Engineering. University of Miskolc Department of Information Technology

Peter Mileff PhD SOFTWARE ENGINEERING. The Basics of Software Engineering. University of Miskolc Department of Information Technology Peter Mileff PhD SOFTWARE ENGINEERING The Basics of Software Engineering University of Miskolc Department of Information Technology Introduction Péter Mileff - Department of Information Engineering Room

More information

Talk With Someone Live Now: (760) 650-2313. One Stop Data & Networking Solutions PREVENT DATA LOSS WITH REMOTE ONLINE BACKUP SERVICE

Talk With Someone Live Now: (760) 650-2313. One Stop Data & Networking Solutions PREVENT DATA LOSS WITH REMOTE ONLINE BACKUP SERVICE One Stop Data & Networking Solutions PREVENT DATA LOSS WITH REMOTE ONLINE BACKUP SERVICE Prevent Data Loss with Remote Online Backup Service The U.S. National Archives & Records Administration states that

More information

How SafeVelocity Improves Network Transfer of Files

How SafeVelocity Improves Network Transfer of Files How SafeVelocity Improves Network Transfer of Files 1. Introduction... 1 2. Common Methods for Network Transfer of Files...2 3. Need for an Improved Network Transfer Solution... 2 4. SafeVelocity The Optimum

More information

ReadyNAS Replicate. Software Reference Manual. 350 East Plumeria Drive San Jose, CA 95134 USA. November 2010 202-10727-01 v1.0

ReadyNAS Replicate. Software Reference Manual. 350 East Plumeria Drive San Jose, CA 95134 USA. November 2010 202-10727-01 v1.0 ReadyNAS Replicate Software Reference Manual 350 East Plumeria Drive San Jose, CA 95134 USA November 2010 202-10727-01 v1.0 2010 NETGEAR, Inc. All rights reserved. No part of this publication may be reproduced,

More information

Asta Powerproject Enterprise

Asta Powerproject Enterprise Asta Powerproject Enterprise Overview and System Requirements Guide Asta Development plc Kingston House Goodsons Mews Wellington Street Thame Oxfordshire OX9 3BX United Kingdom Tel: +44 (0)1844 261700

More information

File Management Windows

File Management Windows File Management Windows : Explorer Navigating the Windows File Structure 1. The Windows Explorer can be opened from the Start Button, Programs menu and clicking on the Windows Explorer application OR by

More information

Online Backup Client User Manual Linux

Online Backup Client User Manual Linux Online Backup Client User Manual Linux 1. Product Information Product: Online Backup Client for Linux Version: 4.1.7 1.1 System Requirements Operating System Linux (RedHat, SuSE, Debian and Debian based

More information

Peer-to-peer Cooperative Backup System

Peer-to-peer Cooperative Backup System Peer-to-peer Cooperative Backup System Sameh Elnikety Mark Lillibridge Mike Burrows Rice University Compaq SRC Microsoft Research Abstract This paper presents the design and implementation of a novel backup

More information

Mobile@Connector for Salesforce.com

Mobile@Connector for Salesforce.com Mobile@Connector for Salesforce.com Provided by: Logotec Engineering User s Manual Version 1.1.1 Table of Contents General information... 3 Overview... 3 Supported devices... 3 Price... 3 Salesforce.com

More information

IMPORTANT Please Read Me First

IMPORTANT Please Read Me First IMPORTANT Please Read Me First 3/02/2006 Table of Contents Table of Contents Part 1 Mac Single User Installation 1 Part 2 Windows Single User Installation 2 Part 3 Mac Server Installation 3 Part 4 Windows

More information

Online Backup Linux Client User Manual

Online Backup Linux Client User Manual Online Backup Linux Client User Manual Software version 4.0.x For Linux distributions August 2011 Version 1.0 Disclaimer This document is compiled with the greatest possible care. However, errors might

More information

Rapid Assessment Key User Manual

Rapid Assessment Key User Manual Rapid Assessment Key User Manual Table of Contents Getting Started with the Rapid Assessment Key... 1 Welcome to the Print Audit Rapid Assessment Key...1 System Requirements...1 Network Requirements...1

More information

Hardware RAID vs. Software RAID: Which Implementation is Best for my Application?

Hardware RAID vs. Software RAID: Which Implementation is Best for my Application? STORAGE SOLUTIONS WHITE PAPER Hardware vs. Software : Which Implementation is Best for my Application? Contents Introduction...1 What is?...1 Software...1 Software Implementations...1 Hardware...2 Hardware

More information

TABLE OF CONTENTS. Legend:

TABLE OF CONTENTS. Legend: user guide Android Ed. 1.1 TABLE OF CONTENTS 1 INTRODUCTION... 4 1.1 Indicators on the top tool bar... 5 1.2 First control bar... 7 1.3 Second control bar... 7 1.4 Description of the icons in the main

More information

Nortel Networks Call Center Reporting Set Up and Operation Guide

Nortel Networks Call Center Reporting Set Up and Operation Guide Nortel Networks Call Center Reporting Set Up and Operation Guide www.nortelnetworks.com 2001 Nortel Networks P0919439 Issue 07 (24) Table of contents How to use this guide... 5 Introduction...5 How this

More information

Online Backup Client User Manual

Online Backup Client User Manual For Linux distributions Software version 4.1.7 Version 2.0 Disclaimer This document is compiled with the greatest possible care. However, errors might have been introduced caused by human mistakes or by

More information

SMALL INDEX LARGE INDEX (SILT)

SMALL INDEX LARGE INDEX (SILT) Wayne State University ECE 7650: Scalable and Secure Internet Services and Architecture SMALL INDEX LARGE INDEX (SILT) A Memory Efficient High Performance Key Value Store QA REPORT Instructor: Dr. Song

More information

FileMaker Server 7. Administrator s Guide. For Windows and Mac OS

FileMaker Server 7. Administrator s Guide. For Windows and Mac OS FileMaker Server 7 Administrator s Guide For Windows and Mac OS 1994-2004, FileMaker, Inc. All Rights Reserved. FileMaker, Inc. 5201 Patrick Henry Drive Santa Clara, California 95054 FileMaker is a trademark

More information