
Tim Dalton :: Blog

October 21, 2011


The PebblePad ePortfolio system is used to support personal learning spaces in the University of Edinburgh

If you are on EASE, you can log in via http://www.pebblepad.co.uk/edinburgh/pebblepad.aspx, or launch PebblePad from a button in MyEd after EASE login: students will find it under the Studies tab, staff under the Teaching tab.

My first experience indicated that creating a trivial note with a few pasted web links was a very time-consuming process, far beyond its value... and the resulting links note was poorly accessible, with many steps needed to retrieve or edit it. The system seems designed for a very small number of assets rather than the many thousands of assets in complex structures that would be needed in a serious personal learning environment for the future.

Keywords: IDEL11, PebblePad, PLE

Posted by Austin Tate | 1 comment(s)

I am reading some papers by Don Norman, and one on "Distributed Cognition" (Norman, 1993) makes some very nice points about the value of large situation rooms and operations centres for providing a joint view of the current situation and the actions being taken in complex environments such as power station control rooms and emergency response centres. I have been in such centres in real and training situations: for natural disaster response in Tokyo, for a nuclear power station in the UK, and for search and rescue coordination in the UK and the USA. They are all set up to allow people to gather round or have a view of screens and see information in a shared environment... the operators and responders are not all looking at their own screens separately, though of course they do that to use the specialised tools, information and communications which they bring to the shared space.

In our work we have sought to replicate this sort of shared situation space, as a basis for human centric decision support.  When we started to embody our technology in virtual worlds we wanted to replicate some of the benefits of this, and indeed provide a shared space for distributed participants, as is often the need in complex multinational emergencies.  We are sometimes asked why we want to replicate rooms with walls when we are in virtual worlds, and I respond that we want the wall space for displays and distinct functional areas that everyone can remember and use.

In our I-Rooms (http://openvce.net/iroom) we have a shared central space in which participants gather and communicate, and from which viewpoint they can direct their attention to any of four functional areas set in a cyclic pattern to allow for situation assessment, option exploration, briefing and external communications.  It supports the OODA Loop (http://en.wikipedia.org/wiki/OODA_loop) as an underlying approach and lets us place human and intelligent systems support into a meaningful whole which all the participants can involve themselves in as appropriate.

Reference: Norman, D. (1993). Things That Make Us Smart: Defending Human Attributes in the Age of the Machine. Reading, MA: Addison-Wesley. Chapter 6: Distributed Cognition (pp. 139–154).

Keywords: IDEL11, ULOE11

Posted by Austin Tate | 0 comment(s)

October 20, 2011

Keywords: IDEL11

Posted by Austin Tate | 0 comment(s)

Keywords: IDEL11, ULOE11

Posted by Austin Tate | 0 comment(s)

Discussion on avatar identity and "personhood" using papers by Tom Boellstorff and James Paul Gee as a basis... in the Cloud Space discussion area over Holyrood Park in the Vue South region in Second Life.

Keywords: IDEL11

Posted by Austin Tate | 0 comment(s)

October 19, 2011

Reading Boellstorff (2008) and his stories of virtual world encounters... I have some related observations. This may get a bit deep and multi-layered... I like layers of storytelling and meaning :-)

Some of you may (or may not) have noticed that my avatar changed appearance during the Second Life building tutorial this week. My normal bearded avatar in his flight suit outfit (there is a whole history behind that too)... changed to be a little red round ball. Why?

When "I" (Austin) am "he" (Ai), he normally shows attention and is responsive to what is happening around him. I do not like "busy" and "afk" indicators and prefer to log out - or go elsewhere in world. I am not happy to leave my avatar unattended and feel it would be rude to do so... though I have no problem with others adopting that style of use of virtual worlds.

For a few years I used text-only, mobile device or low-bandwidth non-graphical clients like Radegast and the iPad's Pocket Metaverse. I was always unhappy that I had no idea what my avatar looked like or how it was positioned, that it might face the wrong way towards those I interacted with, and that it was difficult to make the avatar's appearance show clearly that I was on a text chat/IM-only client.

So I put some effort into designing an avatar that reflected this state of affairs. This was a Personal Satellite Assistant (PSA)... a real device NASA is working on for the Space Station that uses AI technology. It acts as an assistant to relay messages, give instructions and help, and record via camera things going on in experiments on the Space Station. It hovers near astronauts to help them, or can be sent to perform tasks. It has a screen on its front to show astronauts images, video, messages, etc. I have explicit permission from NASA Ames Research Lab to use the image of the skin of this device in my work and in virtual worlds.


I have used a sphere with this PSA skin for a number of AI driven and autonomous devices in Second Life for several years. Enter any I-Room (http://openvce.net/iroom) and there will usually be one at the entrance to act as a greeter or sensor sending back visitor and status information to our intelligent system over the web.

So I created the Ai PSA avatar with the PSA shape, size and skin, and showed on its screen a portrait image of "Ai" to make clear that it is him who is watching, as if over a video teleconference link - i.e. not fully immersive and "in world".

Even though I was not on a low-bandwidth or text client at the SL building tutorial, my attention was elsewhere. In fact my camera was not even in the same region as the tutorial space. I was looking at an object in a distant region that had the properties I wanted to copy, to replicate a complex object I did not know how to build. But I did not feel comfortable just leaving "Ai" unattended... and did not want to fly away to get the information. I have the same issue when I am looking at web pages, or using other applications alongside the Second Life viewer. This was a case where it felt exactly right to use the Ai PSA avatar.

I see this as "Ai" looking through the "PSA" robot floating in the meeting space... "I" am behind "Ai", but it is "Ai" that is disconnected from the meeting space.

Boellstorff, T. (2008). Personhood. In Coming of Age in Second Life (pp. 118–150). Princeton, NJ: Princeton University Press.

[First posted on IDEL11 Discussion Forum, 19-Oct-2011]

Keywords: IDEL11

Posted by Austin Tate | 1 comment(s)

As part of my "Learning Challenge" for the Understanding Learning in the On-line Environment module, I have now had my first lesson... it was exciting going to class again, and in a totally different environment. It reminds me of the great buzz I always sense at the start of each new academic year amongst students and staff!

There was a LOT to take in... but Karen Temple, who is training me, took things step by step. She was keen not to overdo the theory and looking at books, so I got introduced right away to my "model" for the day... a disembodied head on a tripod, but with a lovely head of hair to work on. It had been washed and left tousled to let me learn on it.

But first we went through the various brush types... and the parts of the comb. See http://atate.org/mscel/hair/. Then on to how the hair is "sectioned" to allow it to be worked on in parts and layers. It was very tricky to know where to place your hands and fingers to get the best grip on the hair... and I was not separating the parts very well. I realised I was thinking about it a bit too much, and when I did it a bit more sloppily (at first) I got the rhythm more, I think.

It took some two hours to fix my model's hair this first time. That would be a LONG appointment. Anyway, she has come home with me now for homework. I have been asked to go in next week and show Karen that I can do the whole job. The plan is that I will then be let loose on a live model. Now that will be a thrilling experience for me... and I bet for her - hopefully not in the horror film sense!

Keywords: Hair, IDEL11, ULOE11

Posted by Austin Tate | 0 comment(s)

October 18, 2011

Keywords: IDEL11

Posted by Austin Tate | 0 comment(s)

October 17, 2011

Second Life Building Tutorial


Keywords: IDEL11

Posted by Austin Tate | 0 comment(s)

Avatar Identity

Avatar Identity exercise - are the avatar clones me, us or them? See http://atate.org/ai/ai/ for more...

Keywords: IDEL11

Posted by Austin Tate | 0 comment(s)
