Connectivism ???


Connectivism – George Siemens

In the world of Higher Education and among well-motivated and intelligent students there is probably a case for seeing Connectivism as one theory of learning, but not the only one, and Siemens' conclusion that "The pipe is more important than the content within the pipe" seems almost absurd. I doubt whether many oil companies would concur that the pipe is more important than the oil it contains: the pipe will not per se bring in revenue. The water-pipe network in my house will not keep me warm in winter. It is, of course, one of the essential elements in my heating system, but there are others equally essential, viz. boiler, pump, water, electricity and gas. Take out any of these and I will feel very cold.
“When knowledge, however, is needed, but not known, the ability to plug into sources to meet the requirements becomes a vital skill”. This statement is undoubtedly true, but this “ability” is often a skill learned much earlier in life; mostly, the necessary skill has been taught by a skilled tutor, e.g. learning to swim or to play a musical instrument well.
Connectivism cannot be regarded as an all-embracing, universal learning theory; it is more a reminder that we have many more learning tools available to us, living in the Digital Age, and a reminder, too, that technology is changing fast.
Our five-year-old children are not likely to pick up an iPad and form a social network so that they can learn to read and write; they are taught to read and write.
Furthermore, Connectivism is mostly mediated through language and culture, and, in a world that is so disparate in many ways, both of these factors can impede successful learning.
Moreover, for a learning process to be successful, students need to be told (or learn) how to discriminate between worthwhile knowledge and that which is worthless or misleading; peer-group networks are not sufficient for this.


Lessons learned from first faculty development session on Milestones

Posted by Dale Coddington in meded
I did my first faculty development session on milestones/EPAs (see prior post).  It was an hour-long session.  The first half of the session included faculty members, residents and medical students; the second half was just faculty.

Key Insights:

1) The assigned pre-work was helpful.  The participants did the pre-work.  It gave people background on the project and terminology and allowed us to start the session with a discussion of what surprised people when they rated themselves on the milestones.  It also allowed people who couldn't attend the session to get some background on the topic.

2) Both faculty and residents thought that the milestones used dense "edu-speak" and that it wasn't obvious what was meant by terminology such as "semantic qualifiers" and "illness scripts".  It was suggested that a glossary of terminology would be helpful. 

3) Many faculty assumed that they would be at level 5 (expert), but that was not the case when they rated themselves on some of the milestones.   It will be a "culture shift" for faculty and residents to realize that it is ok not to be marked at the "top" of the scale.  It also brought up the question of whether the "expert" descriptions represent an ideal rather than reality.  It would be interesting to have multiple experienced faculty rate themselves on the milestones and to see if there are certain milestones with a lower percentage of faculty rating themselves as expert (see the sketch after this list).

4) It can be difficult to decide where a resident should be on a milestone.  There was an acknowledgment that a resident may sometimes be at an advanced beginner level and sometimes at a competent level on a milestone such as history taking depending on their familiarity with the presenting chief complaint of the patient.

5) Some of the milestone descriptions are not observable behaviors but rather require "getting inside the head" of the residents to discern their thought process; those milestones will be much more difficult to assess.

6) Residents noted that faculty will need to observe them more and discuss cases in a different way in order to judge where they are on the milestones in a meaningful way.  With multiple rotating faculty, residents wondered how such assessment will take place.

7) Feedback was also discussed by the residents.  Most did not feel that they got much day-to-day feedback on their skills except after a faculty member had done a direct observation (structured clinical observation).  They appreciated the inpatient services that gave feedback on a weekly basis, but said that when the person giving the feedback had only worked with them for a day or two, the feedback was less valuable.

8) It was helpful to have residents and students at the session to understand their experience with assessment and as a way to introduce the new assessment system to them.  It was also helpful to have some time with just the faculty to discuss the logistics of the new system.

9) "Snapshots" of a learner at different levels (novice through expert) for an entrustable professional activity was helpful but will need more discussion and time than half an hour.  After discussing a single snapshot, faculty did not agree when actually evaluating a resident.

10) In the supervision guidelines for entrustable professional activities, "direct supervision" of advanced beginners is needed.  Our faculty did not agree on what "direct supervision" meant.  All agreed that it included repeating a physical exam and key pieces of history.  Some faculty wondered if it meant doing direct observation of the entire encounter including information sharing prior to discharge.  Since this is a recommendation coming from a national organization (Academic Pediatric Association), it would be helpful to have clarification on what is meant by direct supervision and reactive supervision.

11) When we reviewed the entrustable professional activities for our rotations, it was clear that some occur multiple times in a two week rotation (such as provide well child care) and others are less frequent (develop an evaluation and management plan for behavioral, developmental or mental health concerns).  It may be helpful to have a resident "passport" to record feedback on some of the EPAs.

12) All faculty agreed that in order to implement the milestones/EPAs we will need to re-assess how we currently structure the role of the precepting attendings.   Do our current staffing patterns meet the increasing intensity of educational assessment? Probably not.  For example, at Stanford, some faculty are being given 10% time in order to observe residents over time and in different settings on the milestones.  It is not known how much additional faculty time will be needed to effectively implement the new evaluation strategy and who will pay for that time.  It would be helpful if on a national level different programs shared models of faculty assessment strategies.
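Returning to item 3: if multiple experienced faculty rated themselves, a quick tabulation would show which milestones few rate themselves "expert" on. Below is a minimal sketch in Python, with invented data and field names (not part of any milestone tooling):

```python
from collections import defaultdict

# Hypothetical self-ratings: (faculty_id, milestone_id, level 1-5).
ratings = [
    ("fac01", "PC-1", 5), ("fac01", "MK-1", 4),
    ("fac02", "PC-1", 4), ("fac02", "MK-1", 4),
    ("fac03", "PC-1", 5), ("fac03", "MK-1", 3),
]

expert = defaultdict(int)   # count of level-5 self-ratings per milestone
total = defaultdict(int)    # count of all self-ratings per milestone
for _, milestone, level in ratings:
    total[milestone] += 1
    if level == 5:
        expert[milestone] += 1

# Milestones where few experienced faculty call themselves "expert"
# may describe an ideal rather than a realistic endpoint.
for milestone in sorted(total):
    pct = 100 * expert[milestone] / total[milestone]
    print(f"{milestone}: {pct:.0f}% self-rated expert")
```

Milestones with a low expert rate would be candidates for the "ideal rather than reality" discussion raised in item 3.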

Assessing Residents in the Era of Entrustable Professional Activities and Milestones

Posted by Dale Coddington in EDCMOOC, meded | Tagged | Comments Off
The Accreditation Council for Graduate Medical Education (ACGME) is implementing the "Next Accreditation System"(NAS) for medical and surgical post-graduate training programs in the United States.  As a part of NAS, programs will be reporting the progress of their residents on milestones of clinical competence that have been developed by each specialty.  The milestones for specialties that will begin reporting this year can be viewed at http://www.jgme.org/toc/jgme/5/1s1.

With the continuing shift to outcomes-based assessment, there is a need for ongoing education of both faculty and residents about this evolution in assessment.  Pediatric residency programs will begin reporting on 21 (of the 48) milestones in June 2014.

Below is a proposed method for education about the Milestones that I am going to pilot for my department (ambulatory pediatrics).  Challenges for education in our department include faculty working in different locations/ different shifts and variability in familiarity with the milestones.

After reviewing the proposed faculty development plan, please feel free to comment and offer suggestions.

Proposed Faculty Development Plan regarding new resident evaluations.

Independent Learning (estimated time 30-45 minutes)
  
1) Review the 15-minute narrated PowerPoint: an overview of the milestones and entrustable professional activities (EPAs). The video uses Windows Media Player.



2) Do a self-assessment using selected milestones to become familiar with the content and method of assessment.  Please complete PC-1, 4, 5; MK-1; PBL-4; ICS-1; PROF-1; SBP-1, 3 of the Pediatric Milestones http://www.acgme-nas.org/assets/pdf/Milestones/PediatricsMilestones.pdf

3) Additional readings (optional)

a) CL Carraccio, JB Benson, LJ Nixon, PL Derstine "From the educational bench to the clinical bedside: translating the Dreyfus developmental model to the learning of clinical skills" Acad Med 2008;83:761-767
http://journals.lww.com/academicmedicine/Fulltext/2008/08000/From_the_Educational_Bench_to_the_Clinical.17.aspx

b) PK Hicks et al. "The Pediatrics Milestones: Conceptual Framework, Guiding Principles, and Approach to Development" Journal of Graduate Medical Education, Sept 2010; 410-418
www.jgme.org/doi/pdf/10.4300/JGME-D-10-00157.1

c) JR Frank et al. "Competency based medical education: theory to practice" Medical Teacher 2010; 32:638-645.
http://groups.medbiq.org/medbiq/download/attachments/15991574/Frank+et+al.+-+Competency-based+medical+education+theory+to+practice.pdf?version=1&modificationDate=1327513526000


Group Learning Session (60-90 minutes; independent learning above done as "pre-work")

1) Guided discussion of the evaluations for our rotation and review of "Snapshots" to provide examples of what novice, advanced beginner, competent, proficient and expert learners would look like for different EPAs.

2) Practice as a group evaluating current or recent residents using the new evaluation system.  Discuss areas that are difficult to evaluate or where there is disagreement.

(For faculty who are community-based, we will need to explore additional opportunities for group discussion and learning via webinars or other on-line options.)

On-going Application ("post-work", 60-90 minutes).  You may pick either option.

1) Attendance at group evaluation sessions of residents (in person or call in)

2) Implement a Plan-Do-Study-Act (PDSA) cycle for an assessment method for the milestones (examples: resident self-assessment on a milestone, chart-stimulated recall).  Discuss outcomes and lessons learned at the monthly provider meeting (or an alternate way to share learning for community-based physicians)

Faculty would document their work on resident assessment as a part of their annual departmental goals and objectives and would receive institutional faculty development credit for their work. 

  I have now been a …

by Deirdre Robson.  

 

I have now been a tutor for the Open University for longer than I would often care to admit.  During that time the 'how-tos' of assessment and formative feedback have been an integral part of my academic life.  Early on we were instructed in the wonders of the feedback 'sandwich' as a means to draw the student on from simply reading their mark: first, positive comments and encouragement; followed by the real nitty-gritty of comment and critique about the way in which the brief has been attempted; bookended by a final, summative, positive comment.  This can seem somewhat artificial, and I have been told by some students that they don't read the framing comments because they know the real 'meat' is in the critical comments.  Good assessment as a (well-prepared) ham sandwich.

Now I discover that I have also been working to a set of ideas about good assessment practice which I knew nothing about: Bales's (1950) interactional categories.  Bales suggests four main categories of interaction within feedback: Category A - positive reactions (shows agreement or solidarity); Category B - attempted answers (gives suggestions, opinions, information); Category C - questions (asks for information, opinions, suggestions); and finally, Category D - negative reactions (shows disagreement, tension, antagonism).  The parallels between these and the 'sandwich' are clear to see.

Wheeler notes "evidence of systematic connections between different types of tutor comment and the level of attainment in assessment", by which I suppose she means a greater likelihood of Category A in high-scoring assessments, and a higher occurrence of Categories B and C at lower grades.  However, the OU definitely (supposedly) doesn't 'do' Category D, 'negative' comments, which is surely only right, as demoralising weak(er) students is not a good tactic.  They probably need more positive comments, even if it is sometimes pretty difficult to do anything other than phrase a criticism in a positive tone.

We have been introduced to an electronic assessment monitor, 'OpenMentor', based upon Bales's categories.  It would seem straightforward to use, and to offer some possibilities.  However, I did have one big problem with the OpenMentor results, again founded very much on my OU experience: the implication that there had been too much 'teaching' (B responses) in the higher-scoring essays.  I wonder at the reason for this.  Is it because OU TMAs are a rather different entity to essays for conventional f2f/blended learning courses?  Does this suggest that OpenMentor is not, in its present incarnation, really geared toward e-learning/distance learning, which tends to imply that more 'teaching' will be done via comments on the essays?  Does this mean that electronic assessment monitors such as OpenMentor might indeed be discipline-independent, as claimed, but might need to be more context-specific?
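Out of curiosity about the mechanics, here is a deliberately naive sketch of Bales-style comment classification in Python. It is not OpenMentor's actual method, and the keyword cues are invented purely for illustration:

```python
# Toy classifier assigning tutor comments to Bales's (1950) categories.
# The cue lists are invented for illustration; OpenMentor's real
# classification is more sophisticated than a keyword match.
CATEGORY_CUES = {
    "A (positive reactions)": ["well done", "good", "excellent", "agree"],
    "B (attempted answers)": ["you could", "try", "consider", "i suggest"],
    "C (questions)": ["why", "how", "what", "?"],
    "D (negative reactions)": ["wrong", "poor", "disagree", "fails to"],
}

def classify(comment: str) -> str:
    """Return the first Bales category whose cues appear in the comment."""
    lowered = comment.lower()
    for category, cues in CATEGORY_CUES.items():
        if any(cue in lowered for cue in cues):
            return category
    return "unclassified"

comments = [
    "Well done - a clear introduction.",
    "You could strengthen the argument with a second source.",
    "Why did you choose this example?",
]
for c in comments:
    print(classify(c), "->", c)
```

A monitor of this kind would then compare the mix of categories in a tutor's comments against the grade awarded, which is essentially the pattern Wheeler describes.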

Reference

Bales, R.F. (1950) 'A set of categories for the analysis of small group interaction', American Sociological Review, vol. 15, pp. 257-63.

 

 

‘Authentic Assessment’ – how to evaluate a value judgement

by Deirdre Robson.  

 

How to define 'authentic assessment'

A variety of definitions of 'authentic assessment' is offered across educational sites, with little obvious clarity in the term itself.  Such discussions seem only to bear out the observation made by Whitelock & Cross (2012) that the term is difficult to define, though they suggest that such assessment involves the demonstration of "candidates' real-life performance skills in the course of the examination rather than the elicitation of inert knowledge" (Whitelock, 2011, cited in Whitelock & Cross, 2012, p. 3). This division would seem to mirror Sfard's 'acquisition' vs 'participation' metaphors of learning.

There seems to be more clarity about what format 'authentic assessments' might come in - for instance (Whitelock & Cross, 2012; Wiggins, 1990):

  • Collaboration
  • Simulations (role plays, scenarios)
  • Tasks that require students to be effective performers with acquired (and contextualised) knowledge
  • A full array of tasks, e.g. conducting research; writing, revising and discussing papers; providing an engaging oral analysis of a recent political event; collaborating with others on a debate
  • Thorough and justifiable answers, performances or products
  • Examinations that take place in 'real world' settings
  • Assessment 'in tune' with the disciplinary mind

There would seem to be a few issues herein. One is the lack of any real examination of the term 'authentic', which is surely a value judgement based upon social constructivist pedagogy. There is nothing wrong with this, though failure to examine first precepts is not a good principle. The concept of what constitutes 'authentic' assessment would also seem to be slanted more to the (hard) sciences, and not to fit the (soft) iterative disciplines as easily, though they surely need consideration of assessment tasks just as much. How, for instance, might one apply 'problem tasks that are like those encountered by practitioners or experts in the field' or 'examinations that take place in real world settings' to history?

References

Wiggins, Grant (1990). The case for authentic assessment. Practical Assessment, Research & Evaluation, 2(2). Retrieved July 18, 2013 from http://PAREonline.net/getvn.asp?v=2&n=2

Whitelock, D. (2011) 'Activating assessment for learning: are we on the way with Web 2.0?' in Lee, M.J.W. and McLoughlin, C. (eds) Web 2.0-based eLearning: Applying Social Informatics for Tertiary Teaching, Hershey, PA, IGI Global, pp. 319-42.

 

Assessment for Learning

by Deirdre Robson.  

 

The Assessment Reform Group (1999) would have a somewhat limited impact upon anyone involved in education reading it in 2013, as what they propose now has the pedagogic status of 'mom and apple pie': incontrovertible truth.  It is 'bad' to use marking and grading in ways which limit pupils' self-confidence while not providing advice for improvement; it is 'good' to help students know and recognise the standards they are aiming for, and to provide feedback which leads students to recognise the next steps and how to take them.  Though perhaps not all teachers achieve such aims, it is pretty much taken for granted that they will be attempted, even if the kind of teaching which would facilitate such goals might be challenging to achieve.

I personally found their suggestions thought-provoking: observing pupils (listening to how they describe their own thinking processes); using open questions phrased to encourage pupils to use their own reasoning; setting tasks in such a way that pupils are required to use certain skills or ideas; asking pupils to communicate their thinking through a variety of means, e.g. drawing, mind maps, role play; and discussing words and how they are used.  Though, like most thought-provoking proposals, how to achieve such open aims and objectives is easier said than done.

A final thought, however. Such advice was developed in terms of primary and secondary school teaching; what qualifiers might there be when one is considering higher or continuing education? Also, factors such as 'observation' are obviously based upon f2f teaching; how might this be effected successfully in an e-learning context?

Reference

Assessment Reform Group (ARG) (1999) Assessment for Learning: Beyond the Black Box [online], http://assessmentreformgroup.files.wordpress.com/2012/01/beyond_blackbox.pdf

 

Mind-shifting on assessment

Posted by mdvfunes in assessment for learning, Bobby Elliot, e-Assessment, Educational technology, H817, Personal Assessment Environment

Assessment is the process of measuring a person’s knowledge or skills. It’s not a science; it doesn’t prove anything, but passing a test or completing a practical task implies a certain level of competency. A special type of assessment (called formative assessment) is used to aid the learning process (this is called ‘assessment for learning’).

Bobby Elliot, 2003.

Assessment 2.0

This table has changed my life! Well, maybe not quite, but it has changed my perspective on assessment. My environment is higher education, private higher education, which has its own issues when assessing students' work. I teach on a Masters in People and Organisational Development at a business school in the United Kingdom. Sometimes students come from client organisations, sometimes they become clients after the course, and sometimes they come to work for us as consultants. Transparency in assessment is important here, as are layers of peer review and checking standards of assessment across the faculty. How do we know that standards are comparable across the faculty? We implement a rigorous assessment process that is defined each time we start working with a new cohort; self managed learning works within a framework that Eley (1993) defines as collaborative assessment. Her focus is on how power is used in the process: 'power together' as compared to 'power over' the student or the educational establishment:

This model of power assumes that student, peers and staff work together to secure a common view of assessment and its outcomes, based on hearing and understanding different perspectives, and seeking to secure agreement which values all perspectives. This model is essentially collaborative, dependent on reaching consensus.

There are delightful and tangled issues that arise from this assessment model, not least the point of friction between traditional university assessment methods and how we assess students at my business school. Until today I had always seen collaborative assessment as a cross I had to bear for working at a non-traditional university. Now I see that I have been working at the leading edge of assessment for many years and have never stopped to critically reflect on the heutagogy that is implied in self managed learning as an approach to education.

[Elliot's Assessment 2.0 table]

What Elliot (2008) crystallised for me in his table above is that in traditional education we shoehorn evidence for learning into a shape fixed by the educational establishment in order for it to award its accreditations. This is contrasted with what the literature defines as 'assessment for learning':

“the process of seeking and interpreting evidence for use by learners and their teachers to decide where the learners are in their learning, where they need to go and how best to get there” (ARG, 2002).

This definition obscures the core question: who determines what counts as evidence of arriving 'there'? In my world of work this is a collaborative process, revisited afresh at the start of each cohort. Most of the literature I have read on this focusses on schools and how assessment needs to be redefined to support pupils in learning rather than exercise power over them through assessment of learning. This is not my area of work, so I will say no more than that I understand there is a lot of prescriptive work to be done to help teachers and government education departments understand this distinction. Self managed learning is being used in schools by some pioneers, but I know little about how successful these initiatives have been. I count myself lucky that my focus on assessment is in dealing with the downsides of being at the other extreme of the assessment for learning continuum: when I started to read about assessment for learning from my ivory tower, I just asked, is there any other kind?

I want to focus the rest of this post on the content of Elliot’s table above. A key insight is summed  up by Al-Rousi (2013) and focuses on the type of evidence that supports learning:

Elliott [is] focused on the use of digital evidence [...] naturally occurring, [i.e.] already existing [...] not created solely for assessment purposes, [manifested] through multimedia, [and] distributed across different sources.

Elliot is indeed saying that instead of shoehorning evidence we might choose to build the use of naturally occurring evidence purposefully into our assessment process. He is further saying that we need to use web 2.0 tools to develop an e-assessment strategy, and this he calls Assessment 2.0. Self managed learning works with naturally occurring evidence, but has no e-assessment strategy embedded in its approach. Collaborative assessment in self managed learning ensures that the evidence to be sought is outlined in the learning contract upfront, and this can be any type of evidence that supports the learning outcomes being defined. We encourage students to use wide sources of evidence such as video files, audio files, essays, reports, flowcharts, lesson plans, storytelling, painting, spreadsheets and self-assessment statements.

What we are not doing enough of is looking at the use of digital evidence to support learning in an embedded way. If we define e-assessment as anything that involves digital media, then we have been doing it for years: Word documents are submitted, we add our formative feedback via Track Changes, use spreadsheets to tabulate data, create research reports, etc. This is not, in my view, what Elliot intends when he talks about Assessment 2.0. He quotes Downes's (2006) notion of a personal learning environment and posits the need for a Personal Assessment Environment (PAE), where students use the type of web 2.0 tools exemplified in his table to critically reflect on what it means to provide evidence for learning: setting it up before getting on with the business of learning, harvesting it regularly for insights, and then ordering it in a meaningful way to demonstrate achievement of a given standard, which in my domain is Masters standards. This notion is game-changing for me. It implies that digital literacy would no longer be a choice for our faculty or our students, and that an e-assessment strategy has to be agreed and implemented to support students beyond just allowing them to present their evidence as e-portfolios (which, by the way, we still do not allow for administrative reasons: students have to print two copies of their portfolio on paper and hand them in by a specific date…).
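To make the PAE idea concrete, here is a minimal sketch of the record-keeping it implies: naturally occurring evidence, living at distributed sources, mapped to learning-contract outcomes. The structure and field names are my own invention; Elliot specifies no implementation:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class EvidenceItem:
    """One piece of naturally occurring digital evidence, wherever it lives."""
    source: str            # e.g. "blog", "Pinterest board", "video"
    url: str               # where the evidence already exists online
    created: date
    outcomes: list[str]    # learning-contract outcomes it speaks to
    note: str = ""         # the student's own reflection

# Evidence is gathered from distributed sources as it occurs,
# not produced solely for assessment at the end.
portfolio: list[EvidenceItem] = [
    EvidenceItem("blog", "https://example.net/post/12", date(2013, 5, 2),
                 ["facilitation skills"], "First client workshop write-up."),
    EvidenceItem("Pinterest board", "https://example.net/board/3",
                 date(2013, 6, 10), ["reflective practice"]),
]

# Harvest: which outcomes have evidence so far, and which still need it?
covered = {o for item in portfolio for o in item.outcomes}
print("Outcomes evidenced so far:", sorted(covered))
```

Even this toy version makes Elliot's point visible: the evidence stays where it naturally occurs, and the assessment layer merely points at it.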

Whilst self managed learning meets most of Elliot's characteristics when assessing for learning in terms of its principles of operation (e.g. being collaborative, peer- and self-assessed), it falls down when assessed against his 'tool supported' characteristic. Some may argue that what matters most is that the principles are adhered to when developing effective assessment strategies in any educational domain, and that the tools used are a secondary consideration. I disagree with this assertion. Our thinking is shaped by the tools we use. Writing a blog is not the same as writing a letter or an essay. Assessment for learning in a self managed learning Masters means supporting students in the creation of a PAE to gather tangible evidence through their 18-month learning journey. Our challenge in this is almost as hard as that of other educational sectors when shifting from one preposition to another. We already assess students for learning, but how might we design Assessment 2.0 into our work?


A modest experiment I am carrying out is the use of Pinterest to support evidence gathering and dialogue for one of my students' learning contracts. The board's theme only makes sense in relation to the student's learning contract; we agreed to keep comments general enough for the board to be public but specific enough that the student could track her learning journey when it came to writing up. We also agreed that the board would be part of the evidence used to support her self-assessment statement on achievement of learning goals for the Masters. However, whilst we are using it for formative feedback, I fear that, at best, screenshots of the board will be all that makes it to the final portfolio.

My understanding of what Elliot proposes with Assessment 2.0 is that we need to incorporate the distributed nature of digital evidence (amongst other characteristics he discusses) into the way we assess students rather than students having to shape their evidence into a fixed format limited by low digital literacy in certain sectors of the educational establishment. In embedding these tools into the formative stages of learning we would be enhancing the quality of their thinking and preparing them to develop a digital identity that can support them in their future career goals – given the ever increasing need to learn to function effectively online for most professionals.

Al-Rousi, S. (2013) 'Does Web 2.0 = Assessment 2.0?', http://learn.open.ac.uk/mod/oublog/view.php?user=1124720

Assessment Reform Group (2002) Assessment for Learning: 10 Principles http://assessmentreformgroup.files.wordpress.com/2012/01/10principles_english.pdf

Eley, G. (1993) 'Reviewing Self-Managed Learning Assessment', http://www.heacademy.ac.uk/assets/documents/resources/heca/heca_lm09.pdf

Elliot, B. (2008) 'Assessment 2.0', Sept. 2008, http://www.scribd.com/doc/461041/Assessment-20


How open online education works

Posted by Luis López-Cano in Colaboraciones, Música, Openness in Education, video
A lightning presentation by Cinzia Gabellini about the basics of open online education.
Music by Luis López-Cano & GarageBand for iPad Mini.

(If you need music for your presentations, just send me an e-mail.)







Digital can be both tradition and the future, make your choice!

Posted by CARSTEN WILHELM in Articles, blended learning, e-learning, identité numérique, MOOC, oer, open access, Open Learning, pédagogie numérique

Comparing MOOCs is a bit like comparing trees: every tree is unique even though there are definitely families and shared or distinctive traits. The comparison made most often concerning MOOCs is certainly the one between cMOOCs and xMOOCs.
This distinction is important for understanding the construction and impact of seemingly similar formats, the motivation to build such courses, and the impact sought and obtained.
The connectivist brand of MOOCs promoted by G. Siemens and S. Downes, amongst others, is built upon a pedagogical and communicational premise: that connection is key and communication the building block of learning. In this regard this type of course is close to the logic of social networks, where the number of connections initially matters more than their quality.
They often rely on openly available technology and prefer simple and robust systems to esthetically pleasing but proprietary formats.
The xMOOC variety, as the courses by Coursera, Udacity, edX and the like are often called, reproduces the classic course format of lecturing and exams, and gains from the popularity of its experts and affiliated universities rather than from the opportunity to learn from other participants. These MOOCs generally use specifically developed platforms and operate in the manner of startups, as middlemen between experts, institutions and the general public.
Both embrace the fact that their offerings can be accessed by anyone connected to the internet and able to surf (and speaking English). This last point is not without importance, since even with English as a world language the barrier to education in a second language remains.
The difference between DS106 and the Change MOOC on the one hand and the offerings of the 'MOOC-sellers' (as I want to call them) on the other builds on this general difference, but also operates on a slightly different level.
One of the drawings on the DS106 website clearly expresses the difference in approach: «We need to think differently about our culture. This is not simply augmenting our experience with technology. Claim your space, review, remix, make meaning, make art, dammit!». The Change MOOC relies heavily on collaborative reflection on online interaction. Both of these MOOCs thus stress the active role of the participants, not only asking for a production to validate a learning experience but putting the act of creating, and of interacting with others' productions, at the center of the learning experience; this is especially true for DS106.
The technology necessary to participate in both types is web navigation technology, though the technology used is much more varied in the open (c-) formats, since the ball is in the participants' court: they can mobilize whatever technology is at hand to participate, and thus produce complex mashups or original creations.
Whereas the xMOOCs certainly reflect a more traditional approach to teaching (sic!), the more open types of MOOCs explore new forms of learning (sic!) and as such represent the real pedagogical difference.
The difficulty certainly arises when it comes to accreditation (if that's the goal!), since whereas xMOOCs often rely on traditional forms of evaluation (multiple choice, essay), the creative and open MOOCs have to be more open and flexible in their evaluation as well. But then again, new ways and means are to be developed to take these forms into account, much as lifelong learning in general obliges us to rethink evaluation. The badges option, as also used in the open H817, is an interesting alternative in that sense… to be continued…

Mutualisation of OER: the eternal comeback of the spotless author?

Posted by CARSTEN WILHELM in auteurité, author, blended learning, e-learning, édition numérique, identité numérique, oer, open access, Open Learning, présence numérique

The H817 open education learning time has been great. After all, I engaged in this course with quite some questions and even doubts about OER. I wanted to know more and learn not only about the practical aspects of production, which I had already experimented with quite a bit, but also about the particular nature of these resources and their life after going online. In my past 10 years of practice as a co-producer, co-author and administrator of such resources at a public university, and also as a researcher into the use and communication of learning, I often came across the paradox that structures the whole debate: the wide gap between available and 'produsable' resources, and the obstacles that make (re-)using someone else's objects apparently so unlikely, as usage surveys and interviews with practitioners show (Boyer, 2011).
The French digital thematic universities (DTUs), created from 2004 onwards (more like repositories for resources), have been a fascinating terrain to observe the evolution of this field. Largely funded in the inception phase, the project-based production developed into full-fledged online portals for higher-education resources in various disciplines and various forms (video, interactive multimedia, text-based…), manifest in 7 thematic websites all accessible via http://universites-numeriques.fr/. On these websites one can find very different approaches to online learning objects, from lecture taping to interactive resources with various levels of depth (glossaries, indexes, chaptering, in-video menus, links etc.). Few are collaborative in use, although that is more a task for the end-user, and some of the digital portals offer access to comment forums.
These more than 20,000 resources are not used enough, as a 2011 review stated (Boyer). Some of the readings also reflect these issues. The early 2005 UNESCO report lists «fashioning OER that can be scaled up or down to adequately meet education requirements» as one of the challenges (Albright, 2005).
Terry Anderson tackled the question in a 2009 speech at the ICDE Conference in the Netherlands. His take suggests that authors who do not want to reuse material made by others do so out of an overly personal interpretation of their role and in relative ignorance of the economic benefits of sharing. I share a number of Anderson's positions, but maybe the question is a little more complex still.
What really are the barriers to using «non-personal» OERs? Do such resources exist? Is the author absent from her or his resource? How does praxis transpire in OERs? From resources found by accident on the internet to highly documented resources specifically produced for sharing (mutualisation), with metadata and learning scenarios, the common problem seems to lie less in technical abilities than in the symbolic realm, i.e. in the representation that those who use this type of learning object have of their task and job, and of what type of information package, a.k.a. learning object, it should be.
The recent MOOC hype is interesting in this respect: xMOOCs (the most hyped kind) allow for identifiable authorship (a MOOC by a Stanford professor, etc.) and bring the author back onto the playing field, targeting institutions more than the individual teacher.
Is this eternal comeback of the author, which we are also witnessing in the indexing of general resources on the web (PageRank vs author rank), an inevitable tendency, and why is that? I do not have the answer in this blog post, but I think the question is one of the most exciting ones, linking professional identity, culture (of teaching and learning) and intercultural (translation) aspects.
There is also an evident link to research, since research has the same problem: who is the author, who gets credited and why? But that's another blog post.
Cultural and intercultural differences play a role that in my view needs to be studied a lot more, since OER know no boundaries. Together with a team of German and French researchers we have submitted a research proposal in that respect; I'll keep you posted if it goes through.

References:

Albright, P. (2005) 'UNESCO (IIEP): Final forum report', 2008-09-01, http://learn.creativecommons.org/wp-content/uploads/2008/03/oerforumfinalreport.pdf
Anderson, T. (2009) 'Are we ready for OER?', 23rd ICDE conference, 7-10 June 2009, Maastricht, Netherlands.
Boyer, A. (2011) 'Les Universités Numériques Thématiques : Bilan', Revue STICEF, vol. 18, 2011, ISSN 1764-7223, published online 14/02/2012, http://sticef.org