DELONE & MCLEAN SUCCESS MODEL AS A DESCRIPTIVE TOOL IN EVALUATING A VIRTUAL LEARNING ENVIRONMENT

Key words: DeLone & McLean IS success model, Virtual learning environment, Descriptive approach

Raija Halonen1,2, Tom Acton3, William Golden3, Kieran Conboy3
1 Centre for Innovation & Structural Change, National University of Ireland, Galway
2 Department of Information Processing Science, University of Oulu, Finland
3 J.E. Cairnes School of Business & Economics, National University of Ireland, Galway, Ireland
Contact: Raija Halonen, CISC, National University of Ireland, Galway, tel. +35391492847, fax +35391495524, raija.halonen@nuigalway.ie

ABSTRACT

This paper contributes to the discussion about applying the DeLone & McLean success model. Since its introduction in 1992, the model has been widely used and evaluated. In this paper the model is used as a descriptive tool in evaluating a virtual learning environment. The study relies on prior research and does not question the measures. Instead, the model supports the evaluation and describes the virtual environment. In contrast to earlier uses of the success model in virtual learning systems, this study focuses on degree-level studies. The paper shows that the model can be used as a descriptive tool because the six dimensions offer possibilities to explore and describe the environment from several perspectives.

1. INTRODUCTION

This paper adds to the discussion of the use of the IS success model (D&M) originally developed by DeLone and McLean in 1992 and refined in further work in 2003 and 2004 (DeLone & McLean, 1992, 2003, 2004). Earlier research on the D&M has extended the model to cover the context of e-learning at the level of single e-learning courses (see Holsapple & Lee-Post, 2006; Wang et al., 2007; Lin, 2007). As e-learning programmes encompass a number of distinct courses and involve further delivery and pedagogical aspects, there is thus far a lack of research at the programme level exploring the total learning services that support students' aims to attain a degree. In an e-learning context our study therefore focuses on the degree level rather than on a single course. The study uses the D&M (2003 version) to describe the success of the e-learning environment from the student perspective.

Petter et al. (2008) present an overview of the use of the D&M since its initial development. They argue that researchers and practitioners still focus on single dimensions of IS success rather than taking a systemic view of the issue under study. In this paper we detail the applicability and use of the D&M (DeLone & McLean, 2003) when assessing the success of a virtual learning environment in facilitating and enabling e-learning for an entire degree. Building upon a call for research by Petter et al. (2008), this study systemically applies the D&M to an e-learning environment as a whole, rather than focusing on its IT artefacts. Further, we approach the study from the service perspective, as the empirical material was gathered in a private organisation providing virtual learning as a service. In the main, electronic services such as the provision of e-learning are delivered and facilitated over the internet (Rust & Kannan, 2003). Apart from the study material and e-studies, the e-learning programme under study here also offers supervision and interaction. These resources can be viewed as a service that supports the process of studying.
Additionally, our study aligns with a call for research by Holsapple and Lee-Post (2006) proposing research on the applicability of the success model to other areas of e-learning besides higher education settings.

Our paper is structured as follows. The concept of distance learning and its different approaches are introduced. We explain how the DeLone and McLean IS success model (1992; 2003) has been used and developed over time and in different settings, and we highlight its use in e-learning environments. Later we introduce our empirical case and describe our application of the D&M. We present our results, and discuss our findings together with providing our conclusions from the research.

2. E-LEARNING AS A SERVICE

In the early 1960s, distance learning courses began to be offered via television. Over the following decades the evolution of communications technologies facilitated internet-based online e-learning (Schweizer, 2004). A virtual learning environment is a relatively open computer-based system enabling interaction between participants. Students use the study material independently and learn the subject matter in their own time via synchronous or asynchronous course delivery. Indeed, virtual environments shift the learning process from an individual learning experience to a more collective experience, where students may communicate with other students and teachers (Piccoli et al., 2001).

Virtual learning environments can be classified along several dimensions. Piccoli et al. (2001) use time, place, space, technology, interaction, and control in their study on web-based virtual learning environments: these dimensions frame how students use the learning environment. Piccoli et al. reported that the ability to comfortably share control of the learning activities with students was a prerequisite to satisfactory instructor adoption in the virtual learning environment. Therefore, the authors suggested future research on human dimensions that would verify or dispel their conception through empirical investigation. In the context of internet-based customer service systems, Piccoli et al. (2004) emphasise the need to provide tailored services to different customer groups. They define a network-based customer service system as a network-based computerised IS that delivers service to a customer either directly or indirectly. In such systems, the web site is the most visible channel between the service provider and the customer.

In the e-learning environment, students are the customers of the service, with the e-learning programme as the e-service intended to enable students to achieve their degree qualification. As such, the service aspect encompasses the technologies that facilitate its delivery, but also includes the interactions between participant actors in the system as well as the learning experience. Indeed, Rust and Kannan (2003) discuss e-services more broadly than purely network-based computerised IS, describing e-services as including the service product, the service environment and service delivery. Customer requirements are paramount, and therefore e-services are often customised to meet customers' needs. Surjadjaja et al. (2003) delineate e-service as web-based interaction between customers and service providers that takes place partly or totally with the help of the Internet. Santos (2003) argues that service quality is a key determinant of a successful e-service because of the ease of comparing products on the Internet.
Likewise, Zhang and Prybutok (2005) emphasised service quality and introduced a model for exploring users' individual differences, the ease of the e-service, the quality of the web service, risks, user friendliness and users' intentions. Web-based e-learning was explored as a service by Chiu et al. (2005): they described how e-learning is enabled by several synchronous or asynchronous techniques. Synchronous e-learning services include real-time interaction between students and teachers and can be realised, for example, through video, online meetings and chat. Asynchronous e-learning services, in turn, resemble self-study but include non-real-time interaction between students and teachers, for example via email and discussion environments.

The IS literature is replete with studies on success, satisfaction, acceptance and system usage: indeed, some of the central models in the IS literature relate specifically to these topics. The Technology Acceptance Model (TAM), developed by Davis (1986) and refined by others, explores the impact of perceptions towards a system on its acceptance. The DeLone and McLean model (1992) provides a framework for identifying and gauging the relative impact of various determinants and antecedents of system success, and has been applied widely since its inception. Considering the e-learning environment systemically and holistically, we discuss the D&M in light of its potential applicability in the study.

3. MEASURING SYSTEM SUCCESS

The success model developed by DeLone and McLean provides a robust indicator of the success of information systems (DeLone & McLean, 1992). As a precursor, in their seminal work Shannon and Weaver (1949, as cited in DeLone & McLean, 1992) described technical, semantic and effectiveness aspects of the evaluation of information systems. Later, Mason (1978) reformulated these concepts with a behavioural focus by emphasising the impact of IS on changes in user behaviour. Building upon the work of Shannon and Weaver (1949) and Mason (1978), DeLone and McLean (1992) identified six distinct aspects of information systems success: 'System Quality', 'Information Quality', 'Use', 'User Satisfaction', 'Individual Impact' and 'Organisational Impact'. DeLone and McLean (1992, 80-81) highlight four conclusions from their research:
1. The IS researcher has a broad list of individual dependent variables to choose from.
2. Significant reductions in the number of different dependent variable measures are needed so that research results can be compared.
3. There are too few MIS field study research attempts to measure the influence of the MIS effort on organisational performance.
4. MIS success is a multidimensional construct and it should be measured as such.

Later, DeLone and McLean (2003) introduced an update to their IS success model. The main changes concerned quality, and service quality was included in the model. Indeed, DeLone and McLean (2003, 23) note: "As discussed earlier, quality has three major dimensions: information quality, systems quality and service quality". They also added 'Intention to Use' to the model. Finally, they removed 'Individual Impact' and 'Organisational Impact' and replaced them with 'Net Benefits'; further, they added feedback loops from 'Net Benefits' to 'Intention to Use' and 'User Satisfaction' (see Fig. 1).

Figure 1. Updated D&M IS success model (DeLone & McLean, 2003, 24).

The D&M has been widely used to gauge success (for example, see Petter et al., 2008).
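To make the structure just described easier to follow, the sketch below restates the six dimensions of the updated model and the relationships depicted in Figure 1 as a small directed graph in Python. This is purely our illustrative restatement: the dimension names follow DeLone and McLean (2003), while the data structure and the helper function are not part of the model itself.

```python
# Illustrative restatement (ours) of the six dimensions in the updated D&M model
# (DeLone & McLean, 2003) and the relationships shown in Figure 1, written as a
# small directed graph. It is not part of the original model specification.

D_M_2003 = {
    "dimensions": [
        "Information Quality", "System Quality", "Service Quality",
        "Intention to Use / Use", "User Satisfaction", "Net Benefits",
    ],
    # The three quality dimensions influence use and user satisfaction; use and
    # satisfaction influence each other and lead to net benefits, which feed back.
    "paths": [
        ("Information Quality", "Intention to Use / Use"),
        ("Information Quality", "User Satisfaction"),
        ("System Quality", "Intention to Use / Use"),
        ("System Quality", "User Satisfaction"),
        ("Service Quality", "Intention to Use / Use"),
        ("Service Quality", "User Satisfaction"),
        ("Intention to Use / Use", "User Satisfaction"),
        ("User Satisfaction", "Intention to Use / Use"),
        ("Intention to Use / Use", "Net Benefits"),
        ("User Satisfaction", "Net Benefits"),
        ("Net Benefits", "Intention to Use / Use"),   # feedback loop
        ("Net Benefits", "User Satisfaction"),        # feedback loop
    ],
}

def antecedents(dimension: str) -> list[str]:
    """Return the dimensions that point to the given dimension in this sketch."""
    return [src for src, dst in D_M_2003["paths"] if dst == dimension]

print(antecedents("Net Benefits"))  # ['Intention to Use / Use', 'User Satisfaction']
```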
Over time the model has been modified to meet the requirements of several kinds of information systems and from different points of view. Later, DeLone and McLean (2004) applied their success model to evaluate the success of e-commerce systems. From an e-commerce perspective, the key users are customers and providers. Holsapple and Lee-Post (2006) adapted the model for use in evaluating e-learning courses. A year later, Lin (2007) also modified the 2003 model to assess the use of online learning systems. Further, Wang et al. (2007) used the model when they assessed the efficiency and success of e-learning information systems from the viewpoint of organisations and their employees. Wang et al. produced measures consisting of statements classified according to the D&M 2003 model. The statements can easily be adapted case by case, even though the statements concerning 'Net Benefits' measure influences on organisational output. Common to these studies was a focus on the central stakeholders in using the system: for e-learning the central stakeholders are the learners. However, the IS aspect of e-learning systems is but one facet: at the programme level there is an underlying technology to the information system that facilitates and enables learning, communication, programme and course delivery, and other user-based activities. It follows then that at the programme level, evaluation of the success of an e-learning system must be systemic and multi-faceted.

Holsapple and Lee-Post (2006) approached defining and assessing e-learning from an IS perspective. They developed a model that is grounded in both the D&M 1992 and the D&M 2003, incorporated the system development phases and introduced a modified success model. The measures in their model are mostly adjectives that describe the system and they are meant to be specified case by case. The developed model is divided into three phases reflecting the IS development phases: 'System design', 'System delivery' and 'System outcome' (Fig. 2). In their case, the system was an online version of a course.

Figure 2. The e-learning success model (Holsapple & Lee-Post, 2006, 71).
[Figure 2 groups the dimensions into the three phases: System design ('System Quality', 'Information Quality', 'Service Quality'), System delivery ('Use', 'User Satisfaction') and System outcome ('Net Benefits' with positive and negative aspects); each dimension carries example measures specific to their online course, such as PowerPoint slides, a discussion board and practice exams under 'Use'.]

The measures in the model are chosen according to the target to be evaluated or the object of interest. Thus, the measures change when the model is adapted to evaluate another system. The arrows in the figure depict the dependencies between the phases in the assessment. 'System design' is essential when considering the success of delivery and the outcomes that the delivery brings. Likewise, 'Use' and 'User Satisfaction' are dependent upon each other.

Wang et al. (2007) showed that the D&M is applicable and implementable in the context of virtual learning. They employed the model from the actor or learner perspective, and found that the model can be used when developing and testing virtual learning systems or when implementing new virtual learning systems. Recently, the success factors of an online learning system were studied by Lin (2007), who defined an online learning system as an "interactive system" that offers several virtual functions to be used in teaching and also in improving the quality of learning.
Lin implemented the D&M in the context of virtual learning and reported how 'System Quality', 'Information Quality' and 'Service Quality' influence 'User Satisfaction', 'Intention to Use' and actual 'Use'.

4. RESEARCH APPROACH

This qualitative research uses the D&M to describe the success of the virtual learning system as perceived by the students. The approach employed a case study that enabled us to interpret and understand the success of the environment (Walsham, 2006; van der Blonk, 2003). The analysis was interpretive but drew on a quantitative questionnaire. We report the empirical case in sufficient detail to give a thorough understanding of the e-learning service environment.

Generally, in case studies the case can be a person, society, organisation, incident, series of incidents, process, physical unit or an occasion. The case must be delineated from its surroundings and the grounds for doing so must be explained (Yin, 2003). In this study the case was the process that arose when a student used the service offered in a virtual learning environment.

In line with research methods employed elsewhere in implementing the D&M (DeLone & McLean, 2003), in this study empirical data were gathered using a questionnaire delivered via the web. The resultant questionnaire data helped us to illustrate the use of the environment. The questionnaire was structured around the chosen framework, which is explained later in this paper. The questionnaire included questions about the respondents' background, 29 closed questions and three open-ended questions. The background information was used to ensure that the respondents belonged to our target group. The closed questions were answered on a five-point Likert scale (strongly disagree, disagree, neither agree nor disagree, agree, strongly agree). Content analysis was used to analyse the responses to the open questions.

The questionnaire was addressed to students taking basic or vocational examinations in computing or information systems. An essential requirement was that the virtual learning environment was used throughout the teaching. Four courses were still ongoing and one had ended before the questionnaire was available. Only students who had visited the virtual learning environment in the past 1.5 months were included. As such, the target group consisted of 64 students. A total of 25 persons (39%) responded to our call. Of the respondents, 13 students represented long-term studies (Bachelor of Engineering) while only 6 replies (24%) were received from students representing apprenticeship studies. Twenty respondents were still students at the time of the study.

5. THE CASE: VIRTUAL LEARNING ENVIRONMENT AS A SERVICE

The case organisation was a private institution that offered vocational adult education.
In 2007 the institution offered about 70 examinations consisting of basic, vocational or specialised vocational training, short-term personnel education and enterprise-based training programmes. In 2007 the number of students in the institution was about 13,000. Most of the students were employed adults who studied in programmes that focused on occupational training. Typical teaching included contact hours, distance learning and learning by doing. A web-based learning platform had been in use in the organisation for ten years. Moodle was used as the virtual learning platform and its teaching content varied considerably between courses. As is typical of many distance education programmes, some students had returned to education having been in the workforce for a number of years, and others were in continuing education. E-learning course work was asynchronous and could be accomplished in conjunction with existing work-based commitments, in other words, working during the day and studying in the evenings.

The virtual learning environment was a service that offered study material, exercises, study modules, exams, and information about studies and degrees, along with supervision related to studies and to giving evidence of expertise. In the virtual learning environment it was also possible to interact with other students and teachers. Both synchronous and asynchronous features were in use. The virtual learning environment was developed as a service that would support students in the examination process, in understanding the degree as a whole and, finally, in achieving their degrees.

5.1 Evaluation model

In our research we used both the model developed by Holsapple and Lee-Post (2006) and that of Wang et al. (2007). Following the principles of the D&M 1992, we wanted to use as many existing measures as possible. Using Holsapple and Lee-Post's adaptation of the D&M, we created an overview of the measures related to the virtual learning environment in the organisation. Then, using both models, we constructed an adaptation of the model implementable for the study: the measures included in the applied model are presented in Figure 3.

Figure 3. Implemented success model for evaluating the virtual learning environment.
[Figure 3 summarises the adapted model: System design covers 'System Quality' (good availability, stable, easy to use, user friendly), 'Information Quality' (essential, sufficient, useful, well organised, clearly written, up to date) and 'Service Quality'/interaction (available, responsive, fair, understanding); System delivery covers 'Use' (density of use, timetable, study material, exercises, guidelines to accomplishing the degree) and 'User Satisfaction' (overall satisfaction, enjoyable experience, overall success); System outcome covers 'Net Benefits' with positive aspects (benefits to studies, benefits to accomplishing the degree) and negative aspects drawn from the replies (use of time, self-guidance).]

In our framework we modified the measures as follows. 'System Quality' included 'good availability' from DeLone & McLean (2004) and the other three measures from Holsapple & Lee-Post (2006). 'Information Quality' included 'essential' from the D&M 2003, 'sufficient' from Wang et al. (2007) and the other four measures from Holsapple & Lee-Post (2006). 'Service Quality' included 'understanding' as our own added measure and the other three measures from Holsapple & Lee-Post (2006). 'Use' included measures specified for our case. 'User Satisfaction' included three measures from Holsapple & Lee-Post (2006). 'Net Benefits' included positive aspects that were specific to our case and negative aspects that were drawn from the responses.
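As a minimal sketch of how a dimension-to-measures mapping like that of Figure 3 can be turned into closed questionnaire items, the Python snippet below generates one Likert statement per measure for two of the dimensions. The mapping excerpt, the item wording and the function name are our own illustrative assumptions; they do not reproduce the instrument used in the study.

```python
# Illustrative sketch only: turning part of the adapted model (Fig. 3) into
# closed questionnaire statements. The item wording below is invented and
# does not reproduce the questionnaire used in the study.

MODEL_EXCERPT = {
    "System Quality": ["good availability", "stability", "ease of use", "user friendliness"],
    "User Satisfaction": ["overall satisfaction", "enjoyable experience", "overall success"],
}

def draft_items(model: dict[str, list[str]]) -> list[str]:
    """Produce one hypothetical Likert statement per measure in the mapping."""
    items = []
    for dimension, measures in model.items():
        for measure in measures:
            items.append(
                f"[{dimension}] I rate the virtual learning environment positively "
                f"with respect to: {measure}."
            )
    return items

for item in draft_items(MODEL_EXCERPT):
    print(item)
```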
5.2 Questionnaire

The questionnaire was based on the model described in Figure 3. The 'System Quality' measures were very similar to those in previous studies (DeLone & McLean, 2004; Holsapple & Lee-Post, 2006; Wang et al., 2007) and required only minor changes. The 'Information Quality' measures were also similar, as were the questions assessing ease of use and user friendliness. We added specific measures concerning the use of the web-based virtual learning environment so that we could evaluate technical support as a component of 'System Quality'. Supportive actions concerning virtual learning belonged to 'Service Quality', which was in line with all previous adaptations of the D&M (DeLone & McLean, 2004; Holsapple & Lee-Post, 2006; Lin, 2007; Wang et al., 2007). As in the D&M 2003, in measuring 'Service Quality' all kinds of support offered to users were emphasised. In our study we adapted the measures for use in the virtual learning environment and focused on the support given by the teachers. The measures in 'Use' were chosen case by case, as indicated in Figure 3; the choices were based on the interests of the target organisation. 'User Satisfaction' was modified according to our case.

In the framework (Fig. 3), 'System Outcome' consisted of 'Net Benefits' covering both positive and negative aspects. Previous models did not provide ready measures for 'Net Benefits', as we measured the viewpoint of an individual student in an entire study programme instead of a single course or the benefits to the organisation. We wanted to find out whether the students benefited from using the virtual learning environment in their studies and in accomplishing their degrees. Finally, the chosen questions were compared with the questions by Lin (2007) that were analogous to the questions in the models mentioned above. For some chosen questions, Lin's study simply confirmed their necessity. We found no need for additional questions.

We also used open questions that allowed the respondents to give additional information. The open questions asked what the students wished to have added to the virtual learning environment and what they had found most difficult. In addition, we asked what kind of support the students wanted from the virtual environment for their studies and for giving evidence of expertise. Finally, we added background questions to be used when classifying the respondents by study programme and by whether or not they gave evidence of expertise.
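Because the closed questions were answered on the five-point Likert scale and the analysis that follows is descriptive, a simple tabulation of response counts per question is all that is needed. The sketch below shows one way of doing this in Python; the example responses are invented and do not reproduce the study's material.

```python
from collections import Counter

# Five-point Likert scale used for the closed questions.
LIKERT = ["strongly disagree", "disagree", "neither agree nor disagree",
          "agree", "strongly agree"]

def tabulate(responses):
    """Count how many respondents chose each Likert category for one question."""
    counts = Counter(responses)
    return {category: counts.get(category, 0) for category in LIKERT}

# Invented example data (not the study's responses): one closed question.
example = ["agree", "agree", "strongly agree", "neither agree nor disagree", "disagree"]
print(tabulate(example))
# {'strongly disagree': 0, 'disagree': 1, 'neither agree nor disagree': 1,
#  'agree': 2, 'strongly agree': 1}
```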
6. ANALYSIS AND RESULTS

The aim of the research was to evaluate the perceived success of the virtual learning environment from the student perspective. Next, we analyse the results with the help of a framework that was based on the D&M 2003 (DeLone & McLean, 2003). As the sample included only 25 respondents, the statistical analysis is descriptive.

System Quality

'Good availability', 'stability', 'ease of use' and 'user friendliness' were used when measuring 'System Quality' (Fig. 3). From the perspective of service provision, the users evaluated Moodle and its usability. With a few exceptions, the respondents considered the functionality very positive and practically flawless. Half of the respondents considered Moodle mostly easy to navigate and four of them thought it very easy, while five of the respondents reported that they had had some problems. Three students expressed neither positive nor negative views. The amount of support, whether by phone or online, was mostly rated neutrally, and one student strongly disagreed with having received both phone support and online support. The large share of neutral responses may indicate that the respondents had not needed support at all. Responses to the open questions clarified that some students found it difficult to locate the most up-to-date information or material among the contents. Moodle did not indicate updated content by special colours or signals, and therefore it was left to the teachers to inform students about any new material.

Information Quality

Because we wanted to evaluate the ability to serve, content and information in the virtual learning environment were significant, and therefore the majority of the questions concerned 'Information Quality'. Four respondents thought that they sometimes found it difficult to find information in the environment. Sixteen respondents agreed that the information was well organised and that up-to-date information was easily found. Respondents used the open questions to comment on this issue:
"In the beginning it was difficult to figure out the structure of the course …"
"… was difficult to find out what was to be found and where …"
The material was organised so that it supported the structure of the examination. Therefore it was challenging to notice updated information. The students were asked whether the teaching plans supported them in understanding the purpose of the study blocks. Twenty-one of them thought that the plans were useful while the rest disagreed. Sufficiency was asked about for both the study material and the exercises. The replies were evenly divided: the material was perceived as sufficient by ten students and insufficient by ten. Eleven of them thought that they had enough exercises and eight thought they had too few. The study material was perceived as rather clear and only one respondent disagreed. In the open questions the responses concerned the need for more study material and links to supplementary study material. Students wished for more model examples of how to do the exercises. Respondents also wanted more information about giving evidence of expertise and examples of such evidence. From the research interest point of view it was interesting to find out how the students perceived information in relation to accomplishing their degrees. The students were asked whether the instructions related to giving evidence of expertise guided them in accomplishing their degrees. One third strongly agreed and one third agreed that the instructions supported them. The respondents also perceived that the virtual learning environment offered essential information; only one student disagreed. The students mainly thought that they got the information they needed from the environment.

Service Quality

Service quality includes all support that is offered to users (DeLone & McLean, 2003). In the model by Holsapple and Lee-Post (2006), and in our case, 'Service Quality' is measured in the interaction between students and teachers. We measured availability, responsiveness, fairness and understanding (Fig. 3). The students' opinions of the guidance in using Moodle differed considerably. Eleven respondents were satisfied, ten did not say whether they were satisfied or not, and four students would have needed more instruction in using Moodle. Most respondents were satisfied with the support and instruction they received in Moodle; only one student disagreed. The students were mostly satisfied with the interaction between students and teachers. However, seven respondents did not say whether they were satisfied or not, and two students disagreed. In the open questions the opinions concerning interaction were positive:
"Support is available via the Web if you need it. Either from teachers or peer students. This functions beautifully!"

We wanted to find out whether the environment acted as a supportive service. The students were asked what kind of support they would like to receive via the Web. The respondents clearly pointed out a need for support in giving evidence of expertise:
"… some patterns of models of the evidence, what kind they should be …"
"… virtual evidence files. Likewise there could be some extracts from the real evidence occasion as dialogs …"
The wishes did not concern getting instructions from teachers or increasing interaction, but rather getting concrete examples of giving evidence of expertise into the environment. The students thought that with such examples they could better understand how they could show their own expertise.

Use / Intention to Use

The measures were chosen according to our case. They described density of use, timetables, study material, exercises and instructions for accomplishing degrees (Fig. 3). We did not look at future intention to use. The students were asked how often they visited the virtual learning environment. Most of them used the environment several times a week; one respondent used it once a month and one respondent used it less often. The students were asked whether they mainly looked at the study schedule, and eight of them used the environment mainly for other purposes. Nineteen students reported that they studied weekly with the help of the study material and exercises offered in the environment. Two respondents used the environment for their studies less than monthly.

User Satisfaction

In our research 'User Satisfaction' was seen as the students' opinions on using the virtual learning environment. The measures were whether the students were in general satisfied with the environment, whether they had an enjoyable experience when using it and whether, overall, it was a success (Fig. 3). The students expressed a very positive attitude towards the virtual learning environment. The respondents were very pleased that such an environment existed and there were no negative comments at all. Two respondents did not say whether they felt the environment was useful or not. The most neutral responses were given when the students were asked about their experiences of getting useful information concerning their expertise from the evaluators. Only three respondents had received useful information from the teachers, and the rest neither agreed nor disagreed.

Net Benefits

In our study we looked at the benefits related to studies and to giving evidence of expertise gained from using the virtual learning environment (see Fig. 3). However, negative influences may also be found in using a virtual learning environment or in web-based learning (Holsapple & Lee-Post, 2006). When measuring 'Net Benefits', we could use the same measure for the benefits of the contents and the use of the virtual learning environment. All except one respondent found the contents and use of the environment beneficial. However, opinions on the benefits of combining the information with work differed: eleven students neither agreed nor disagreed, while twelve of them had found a positive influence. The most interesting statement from the research point of view was "The virtual learning environment supports me in accomplishing the degree". Eight respondents strongly agreed and 13 agreed. Even though the respondents perceived that they got support for their degree, in the open questions they wished for more support related to giving evidence of expertise, as reported earlier in the section "Information Quality".
The virtual learning environment was experienced as useful both in planning and giving evidence of expertise and, overall, in accomplishing degrees. The positive aspects consisted of 'benefits to studies' and 'benefits to accomplishing the degree'. However, some negative aspects arose in the open questions. Employed students tended to find it difficult to combine diversified teaching, work and family life. Diversified teaching required the students to be independent, able to guide themselves and able to plan their own timetables. The question concerning the most difficult issue received answers such as:
"The most difficult has been the problems with using time …"
"… to find self-discipline in these studies …"
The virtual learning environment enabled studying regardless of time and place, but at the same time the students were required to study between the contact days. Therefore measures such as use of time and self-guidance could also be added to the success model used in our research.

Relationships between measures

DeLone and McLean (1992, 2003) emphasise that there are relationships between the measures. The relationships should be tested, but DeLone and McLean do not prescribe a testing method. In quantitative research the relationships could be tested with correlations, but in our qualitative research that was not possible. In our research we analysed the relationships by interpreting the received responses. This was feasible because our material consisted of 25 completed responses. The interpretation was done by looking first at the relationship of 'Information Quality', 'System Quality' and 'Service Quality' with 'Use' and 'User Satisfaction'. After that, 'Use' and 'User Satisfaction' were examined in relation to 'Net Benefits'. Finally, we examined the relationship between 'Use' and 'User Satisfaction'. The relationships were examined by comparing the given responses with each other and by scrutinising whether positive values on one measure were accompanied by positive values on the other.

First, the measures of quality were explored in relation to the measures of use: 'Information Quality' vs. 'Use', 'System Quality' vs. 'Use' and 'Service Quality' vs. 'Use'. According to the replies, a student who was satisfied with 'Information Quality', 'System Quality' or 'Service Quality' was a student who used the virtual learning environment frequently and in a versatile way. Conversely, if a student was dissatisfied with some issue in 'Information Quality', 'System Quality' or 'Service Quality', this did not necessarily decrease the use of the virtual learning environment or the overall satisfaction. Second, the measures of 'Use' and 'Net Benefits' were examined. Students who used the service often and in a versatile way reported that they received support in their studies and in accomplishing their degrees. When comparing the measures 'User Satisfaction' and 'Net Benefits', the students who agreed or strongly agreed with overall user satisfaction also reported that the virtual learning environment had benefited them in their studies. Finally, the measures 'Use' and 'User Satisfaction' were analysed. Students who used the virtual learning environment frequently were extremely satisfied, while some respondents who used the service less appeared dissatisfied. This interpretation gives an overall picture of the measures used in our study. As some questions received no negative values, it was impossible to interpret a situation in which a student would be extremely dissatisfied with the service.
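The pairwise reading described above can be mimicked mechanically with a simple cross-tabulation, in which respondents are grouped by their answer on one measure and the distribution of their answers on another measure is then inspected. The Python sketch below illustrates this with invented data; it is not the interpretive analysis performed in the study.

```python
from collections import Counter, defaultdict

def cross_tab(pairs):
    """Group responses on measure B by the response given on measure A."""
    table = defaultdict(Counter)
    for a_value, b_value in pairs:
        table[a_value][b_value] += 1
    return dict(table)

# Invented example: each tuple is one respondent's
# (frequency of use, overall satisfaction) answer pair.
responses = [
    ("several times a week", "agree"),
    ("several times a week", "strongly agree"),
    ("weekly", "agree"),
    ("monthly", "neither agree nor disagree"),
]
for use_level, satisfaction_counts in cross_tab(responses).items():
    print(use_level, dict(satisfaction_counts))
```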
As a whole, the interpretation of the relationships between the measures used is indicative rather than conclusive.

7. DISCUSSION

In our study we were looking for answers to the question of how computing students perceive a virtual learning environment as supporting them in accomplishing their degrees. We used the IS success model developed by DeLone and McLean (2003) and models derived from it. The virtual learning environment, with its contents and its possibilities for learning and supervision, was defined as a service offered to the students. The teachers in the organisation had aimed to develop the virtual learning environment in such a way that it would support students in accomplishing their degrees. In this sense, the environment was highly customised (see Rust & Kannan, 2003). The virtual learning environment consisted of Moodle as the platform, study material, exercises, exams, web-based study modules, discussions and instructions connected with the studies and with accomplishing degrees. The structure of the environment was intended to enable the students to access the information they needed throughout the environment. In addition to the studies, there were other instructions and information in the environment that were to help the students assimilate issues related to accomplishing the degree.

System quality has a significant influence on use and user satisfaction (DeLone & McLean, 2003; Holsapple & Lee-Post, 2006). In our research the system was the Moodle platform, and when evaluating 'System Quality' we considered Moodle's functionality and the technical support connected with its use. Students perceived that the virtual learning environment operated almost without fault, which we interpret as describing the stability and good availability of the system. Five respondents replied that they had had problems in navigating between the functions in Moodle. The replies did not indicate whether the problems appeared at the beginning of the studies or later. Most of the respondents neither agreed nor disagreed when asked whether there was enough online support or phone support concerning the platform. It seems that these respondents did not need the support.

Information quality, too, has a significant impact on use and user satisfaction (DeLone & McLean, 2003; Holsapple & Lee-Post, 2006; Lin, 2007). Hence, information is an important factor in the virtual learning environment. The respondents were mainly satisfied with how the information was organised. Some students had experienced difficulties in finding information at the beginning of their studies, but with increased use information was found more easily. The replies did not indicate whether the organisation of the information helped the students perceive the structure of the degree. Replies concerning 'Information Quality' also highlighted three issues related to 'Service Quality'. The students perceived that the plans of the study blocks helped them understand the purpose of their studies. Another important finding concerned the students' experience of receiving essential and needed information for their degree from the virtual learning environment. The third significant success factor was the instructions on giving evidence of expertise. Two thirds of the respondents perceived that the instructions supported them in accomplishing their degrees. We assume that those who neither agreed nor disagreed were at the beginning of their studies.

Service quality covers all the support that is offered to users (DeLone & McLean, 2003).
In our study we measured 'Service Quality' by evaluating the interaction between the students and teachers (see Holsapple & Lee-Post, 2006). When asked, the students replied that they were mostly satisfied with the interaction. The students had received support and guidance and their questions were answered. These results tell us that the respondents were satisfied with the guidance given. Service quality is extremely important because poor service may lead to the loss of customers (DeLone & McLean, 2003). From the e-learning perspective we could interpret that weak interaction in the virtual learning environment could lead to reluctance to study. Our measures showed that 'Service Quality' was good. However, the success could be improved by utilising different models of interaction. The replies did not indicate what the final impact of the interaction between students and teachers was on accomplishing degrees.

The measures in the D&M 2003 can be adapted according to the case (DeLone & McLean, 2003). Use was scrutinised with measures of the density of use and the objects of use (Fig. 3). We did not explore 'Intention to Use' in our study. According to the replies, the students used the virtual learning environment weekly, most of them even several times per week. Nineteen students said that they used the web material weekly, and more than 20 of them used the environment when they planned to give evidence of expertise, which was significant information for the organisation.

User satisfaction was investigated by asking the students' opinions about using the virtual learning environment. The replies indicated a positive attitude towards web-based learning and the use of the virtual learning environment. The students were extremely satisfied with the possibility of using the environment in their studies. That finding was contrary to the results of Piccoli et al. (2001), who reported that students showed lower satisfaction with the virtual learning environment compared to the traditional one.

Net benefits in web-based learning are positive consequences (Holsapple & Lee-Post, 2006), and in our research they were positive consequences for studies and for giving evidence of expertise. The most important outcome was that the students perceived that they benefited from the virtual learning environment when they accomplished their degrees. The biggest challenge in giving evidence of expertise was to recognise the proper tasks in one's own work to be used as part of the degree. Therefore the need for more guidance was not a surprise to the organisation. Even though the use of the environment was perceived as useful in accomplishing the degree, the information found in the environment was not always seen as useful when the students planned how they could give evidence of expertise. 'Net Benefits' indicated that the virtual learning environment supports students when they accomplish their degrees.

8. CONCLUSIONS

Our research shows that the D&M 2003 (DeLone & McLean, 2003) can be used as a descriptive tool when evaluating a virtual learning environment. The six dimensions offer possibilities to explore and describe the environment from several perspectives. Holsapple and Lee-Post (2006) emphasise that the measures should be used according to the target system. We used the D&M 2003 model by adapting the previous modifications and then adding some measures of our own. In our descriptive case study we explored the relationships between the measures by interpreting the research material.
With the help of the D&M 2003 we could state that the virtual learning environment had succeeded well in serving students in accomplishing their degrees. Five dimensions ('System Quality', 'Service Quality', 'Use', 'User Satisfaction' and 'Net Benefits') were interpreted as positive. 'Information Quality' was perceived as good, but more material was desired in the environment.

Our research extended the use of the D&M 2003 by applying it as a descriptive tool. We did not focus on single dimensions of IS success (see Petter et al., 2008). Instead, we used the model to describe the success of the virtual learning environment in supporting the accomplishment of degrees. At the same time, the D&M 2003 model enabled us to form a clear picture of the environment's usability.

However, we must question the results because of possible distortion or one-sidedness (Klein & Myers, 1999). The positive responses may raise the question of whether the right questions were asked at all. The questions seemed relevant at the time, but they might need modification for use in further research. Also, the respondents were all adults, and it would be interesting to know how a younger generation would respond to these questions and whether there are differences between measures in different age groups. Also, even though our empirical material indicated, as described with the help of the D&M 2003, that the virtual learning environment succeeded in serving the adult students, it would be interesting to know what a larger group of students of different ages would think of it. Furthermore, a teacher may use several pedagogic models and methods when teaching the students; however, the pedagogic approach was left outside the scope of our study. Likewise, we did not discuss technological solutions in our article. That leaves space for further research.

Acknowledgement
M.Sc. Heli Thomander and the students are gratefully acknowledged for their efforts in this research.

REFERENCES

Baida, Z., Gordijn, J. and Omelayenko, B. (2004), 'A shared service terminology for online service provisioning', ACM, vol. 15 no. 1, pp. 1-10. Available at http://portal.acm.org/citation.cfm?id=1052222#.
van der Blonk, H. (2003), 'Writing case studies in information systems research', Journal of Information Technology, vol. 18 no. 1, pp. 45-52.
Chiu, C.-M., Hsu, M.-H., Sun, S.-Y., Lin, T.-C. and Sun, P.-C. (2005), 'Usability, quality, value and e-learning continuance decisions', Computers & Education, vol. 45 no. 4, pp. 399-416.
Davis, F.D. (1986), A technology acceptance model for empirically testing new end-user information systems: theory and results. Doctoral thesis, Sloan School of Management, Massachusetts Institute of Technology, Massachusetts.
DeLone, W.H. and McLean, E.R. (1992), 'Information systems success: The quest for the dependent variable', Information Systems Research, vol. 3 no. 1, pp. 60-95.
DeLone, W.H. and McLean, E.R. (2003), 'The DeLone and McLean model of information systems success: A ten-year update', Journal of Management Information Systems, vol. 19 no. 4, pp. 9-30.
DeLone, W.H. and McLean, E.R. (2004), 'Measuring e-commerce success: applying the DeLone & McLean information systems success model', International Journal of Electronic Commerce, vol. 9 no. 1, pp. 31-47.
Eisenhardt, K.M. and Graebner, M.E. (2007), 'Theory building from cases: Opportunities and challenges', Academy of Management Journal, vol. 50 no. 1, pp. 25-32.
Holsapple, C.W. and Lee-Post, A. (2006), 'Defining, assessing, and promoting e-learning success: An information systems perspective', Decision Sciences Journal of Innovative Education, vol. 4 no. 1, pp. 67-85.
Klein, H.K. and Myers, M.D. (1999), 'A set of principles for conducting and evaluating interpretive field studies in information systems', MIS Quarterly, vol. 23 no. 1, pp. 67-94.
Lin, H.-F. (2007), 'Measuring online learning systems success: Applying the updated DeLone and McLean model', CyberPsychology & Behavior, vol. 10 no. 6, pp. 817-820.
Mason, R. (1978), 'Measuring information output: A communication systems approach', Information & Management, vol. 1 no. 5, pp. 219-234.
Petter, S., DeLone, W. and McLean, E. (2008), 'Measuring information systems success: models, dimensions, measures, and interrelationships', European Journal of Information Systems, vol. 17, pp. 236-263.
Piccoli, G., Ahmad, R. and Ives, B. (2001), 'Web-based virtual learning environments: a research framework and a preliminary assessment of effectiveness in basic IT skills training', MIS Quarterly, vol. 25 no. 4, pp. 401-426.
Piccoli, G., Brohman, M.K., Watson, R.T. and Parasuraman, A. (2004), 'Net-based customer service systems: Evolution and revolution in web site functionalities', Decision Sciences, vol. 35 no. 3, pp. 423-455.
Rust, R.T. and Kannan, P.K. (2003), 'E-service: A new paradigm for business in the electronic environment', Communications of the ACM, vol. 46 no. 6, pp. 36-42.
Santos, J. (2003), 'E-service quality: A model of virtual service quality dimensions', Managing Service Quality, vol. 13 no. 3, pp. 233-246.
Schweizer, H. (2004), 'E-learning in business', Journal of Management Education, vol. 28 no. 6, pp. 674-692.
Shannon, C. and Weaver, W. (1949), The mathematical theory of communication, University of Illinois Press, Urbana, IL.
Surjadjaja, H., Ghosh, S. and Antony, F. (2003), 'Determining and assessing the determinants of e-service operations', Managing Service Quality, vol. 13 no. 1, pp. 39-53.
Walsham, G. (2006), 'Doing interpretive research', European Journal of Information Systems, vol. 15 no. 3, pp. 320-330.
Wang, Y.-S., Wang, H.-Y. and Shee, D.Y. (2007), 'Measuring e-learning systems success in an organizational context: Scale development and validation', Computers in Human Behavior, vol. 23 no. 4, pp. 1792-1808.
Yin, R.K. (2003), Case study research: Design and methods (3rd ed.), SAGE Publications, London.
Zhang, X. and Prybutok, V.R. (2005), 'A consumer perspective of e-service quality', IEEE Transactions on Engineering Management, vol. 52 no. 4, pp. 461-477.