Saturday, December 9, 2006

Accessibility Policy


From LVCCLD's website: "Diversity applies to more than race and ethnicity. It applies to physical disabilities, sexual orientation, age, language and social class." Though LVCCLD's policy addresses services for diverse populations, it does not directly address web accessibility. Like many districts, it does not yet have any tools that would support web accessibility for patrons with disabilities. These tools are necessary, and I will explain why, as well as give examples of what tools might be implemented.

ALA states:
In response to two of the 1990 Americans with Disabilities Act's (ADA) mandates: any public library must provide equal services to any person requesting them, regardless of disability, and no qualified individual with a disability shall be excluded from participation, denied services, or subjected to discrimination.

This mandate has been in effect since 1990, and I believe many public libraries have attempted to provide such services. For example, a library I worked for specified that the distance between any two fixed objects must be three feet or more, to allow wheelchair access. They also had a TTY device, which lets people who are deaf, hard of hearing, or speech-impaired use the telephone to communicate. However, given the enormous advances in technology since 1990, new needs must be considered as part of providing access. The ADA is defined broadly enough that we can include web accessibility.


"I need some solid justifications if the library is to spend all this money on accessibility"

1. Why? Because the ADA instructs us to.
Even something as simple as a desk that can be elevated or lowered would make a difference to a person in a wheelchair. There are many disabilities, some obvious and some hidden, and all of them need to be addressed. Both software and hardware are going to have an impact on accessibility.

2. It would attract people with disabilities to use the library.
Many of today's library users come to the library to use a computer. Wider aisles are important, but they only help the roughly 20% of disabled users who are in wheelchairs. If the majority of the 'abled' population wants internet access, then it is only logical that the disabled population wants access as well. These people are part of the voting population, too. When a bond measure comes up on the ballot, the library, by providing for people with disabilities, may have a stronger position in the community.

3. It would also position the library as an information access point for people with disabilities, their relatives, and their service providers.
In this case, the library is attracting and providing services and information for people who have disabilities and for those connected to them. Like a good medical reference collection, information about and examples of assistive technologies can benefit the community as a whole.

4. We want to guarantee that library services are available on an equal basis to all members of the community.
Making the web accessible appeals to the fundamental nature of libraries themselves. Access in order to promote intellectual freedom has always been important, and that access is inhibited for people with disabilities in a regular library setting.

5. The benefits of accessible web design extend beyond the community of people with disabilities and an aging population.
It enables low technology to access high technology (Waddell), which is a very strong argument. These adaptive/assistive technologies would not only help those with disabilities, but could also provide access to the least advantaged members of our society, the poor. This again appeals to our ethical duties as librarians.

How? What?
A representative from the Disability Resources Center, Dawn Hunziker, presented a variety of tools that could be utilized in libraries for increasing web accessibility. There are several categories of assistive hardware and software: text-to-speech programs, voice recognition, word prediction, concept-mapping tools like Inspiration, screen magnification, and screen readers. Of these, Dawn recommended two to start out with. After doing the readings, I agree that screen magnification and screen readers would have an immediate impact and greatly increase accessibility.

In at least two instances, the library does not need to risk any money: ReadPlease and Natural Reader are free text-to-speech programs. Libraries could ease into web accessibility with programs like these.
Another way libraries might save some money is by purchasing one or two Macs, since Apple builds assistive technologies into its operating system.

Even changing the default browser might make a difference. According to WebAIM:
The focus that Mozilla Firefox places on web standards and the user experience is quickly making it a popular choice for both web developers and end users alike. Firefox is also becoming a popular browser on the accessibility front. Its open-source nature and extensibility are allowing Firefox to be a powerful medium for increased accessibility of web content.


The library must also keep up with Section 508 of the Rehabilitation Act, which outlines accessibility requirements related to HTML, Java, and other plug-ins. To make this easier, an interesting tool can be found at Cynthia Says, which lets you test a website against accessibility standards. University of Arizona's site did well, with some warnings. I tested LVCCLD's site and was confronted by a long list of errors. A demonstration of how inaccessible a library's website is may go a long way toward convincing staff and others to adopt new technologies.
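To make that concrete, here is a small sketch of the kind of problem a checker like Cynthia Says typically flags (the file and field names are made up for illustration): an image with no alternative text gives a screen reader nothing to announce, and an unlabeled form field leaves its purpose unclear.

    <!-- Flagged: a screen reader has nothing to read for this image -->
    <img src="newbooks.jpg">

    <!-- Fixed: the alt attribute supplies text for the screen reader -->
    <img src="newbooks.jpg" alt="Display of this month's new arrivals">

    <!-- Fixed: the label ties the visible text to the search field -->
    <label for="catalog-search">Search the catalog:</label>
    <input type="text" id="catalog-search" name="q">

Small changes like these cost nothing, and they are exactly what the automated tests look for.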
Works Cited
Hensley, J. (2005). Adaptive Technologies. In Technology for the Rest of Us. Westport, CT: Libraries Unlimited.

W3C, Web Accessibility Initiative. Accessed 12/07/06. http://www.w3.org/WAI/

ASCLA, Association of Specialized and Cooperative Library Agencies. Accessed 12/07/06. http://www.ala.org/ala/ascla/asclaourassoc/guidetopoliciesb/guidepolicies.htm

Waddell, C. Applying the ADA to the Internet: A Web Accessibility Standard. Accessed 12/07/06. http://www.icdri.org/CynthiaW/applying_the_ada_to_the_internet.htm

Cynthia Says. Accessed 12/07/06. http://www.icdri.org/test_your_site_now.htm

WebAIM. Accessed 12/07/06. http://www.webaim.org

Digitization, Not Preservation

"Technology allows us to see things we might otherwise miss, hear things we might otherwise fail to notice, and learn from those who came before about our past, our present, and possibilities for our future."

This summer I took a class in Archives and was under the impression that digitization was a form of preservation. Because of this, I was quite surprised to read Hastings' and Lewis's assertion in Chapter 11, "Let's Get Digital", that "digitization is not preservation". I understand it now like this: digitization is a means of providing access, but it does not preserve the item itself. And those things that are 'born digital' don't need digitization, but they do need preservation, so that people can access them in new formats. So the quote from the CDH is quite apropos: technology truly allows us access to things we might never have seen. Access is one of the most important things that libraries provide. It is part of intellectual freedom, a natural right that everyone, as a rational autonomous being, has. Digitization facilitates this right.
So, what institutions are practicing digitization, and in turn providing better access?

Cornell University
According to the digitization blog, Cornell has apparently been a long-time partner of Microsoft and has just agreed to participate in Microsoft's Live Book Search.
The initiative will focus on works already in the public domain and allow students, researchers, and scholars to use Live Book Search to locate and read books from Cornell University Library's outstanding collections regardless of where they reside in the world. It supports both the library's long-standing commitment to make its collections broadly available and Cornell President David Skorton's goal to increase the impact of the university beyond campus boundaries.
Cornell's digital library site is quite advanced. I wanted to explore what they might be offering Microsoft and came across the Edgar Allan Poe digital exhibition. They have digitized a large portion of the collection, so you don't have to go to Cornell to see it. There are pictures, manuscripts, playbills, and newspaper entries, all from the late 1700s to the early 1800s. I do not know if these manuscripts will be part of what is offered to Microsoft, but they are a brilliant example of what can be done with digitization.

Online Archive of California
As part of our activities for this unit, we were to check out the OAC. I realized I had been to this site before, when I did research for my archives class paper. So this time I searched for comics, with the institution's goal in mind. I was unable to locate any online, though some were old enough to be out of copyright. In this instance I imagine the person who donated the collection may have made a formal request that it be available only in person. Their mission does seem to be realized in practice, though: the OAC looks to develop extensive finding aids and utilize them in a single online database. The finding aids for the objects I saw were quite extensive, listing contents by box, and sometimes by folder.

DLCMS (digital library content management system)
I found this project interesting because the author of the blog is developing this content in Drupal. Drupal is significant because it is the platform we upload our ePortfolios to. Mark Jordan's "goal is to develop a single Drupal module, called DLCMS, that packages up the document handlers and allows implementors to create a digital collection quickly and easily without having to perform unreasonable amounts of configuration or customization." From our other readings I know that OCLC is trying to do something similar with CONTENTdm. But what I find interesting about Jordan's project is that he is using pre-existing software: another example of 'mashing up' two things to get one superior product. A product like this seems very important, because the fact that there is no tool everyone is using creates a large void. I wonder how many other people are trying to do this. Every archival page I've visited seems quite limited in what it does. There is no set of rules for a finding aid, unlike the creation of a MARC record, for example. Perhaps CONTENTdm will provide this. But until then, it is necessary to develop programs the way Jordan is doing.

Digital Library Federation
The Digital Library Federation is an organization committed to preserving digital information and helping others do the same. This reflects the missions of the digitization projects I've looked at, and the software aspect as well.
The Digital Library Federation is an international association of libraries and allied institutions. Its mission is to enable new research and scholarship of its members, students, scholars, lifelong learners, and the general public by developing an international network of digital libraries. DLF relies on collaboration, the expertise of its members, and a nimble, flexible, organizational structure to fulfill its mission.
For our wiki, my group was assigned the DLF. The DLF focuses on and helps support five aspects of digital libraries: digital collections, digital production, digital preservation, usage and users, and digital library architectures. I thought I would delve deeper into digital library architecture.
The DLF utilizes FEDORA, a content management program. FEDORA is open-source software that gives organizations a flexible, service-oriented architecture for managing and delivering their digital content. Not all the libraries affiliated with the DLF utilize FEDORA, but I can imagine how amazing it would be if they did: with one program, all the individual databases could conceivably be searched. While this venture does not do the digitization itself, it seems to be a support center and network for those that do.
Conclusion
This unit made me much more familiar with digitization techniques. After reviewing Cornell's digital imaging tutorial, I feel I have a better grasp of how to get objects digitized. The tutorial was very well done; I liked the interactive questions. After doing the tutorial and our readings, I was able to better understand the projects I learned about on the digitization blog. It's absolutely amazing how many of our library school classes intersect. In this blog alone I utilized knowledge from my ethics class, my archives class, and all that we've learned in this one, Introduction to Information Technology.

AZLA and Web 2.0


Tales, Tips and Tools: Google in Your Library
At the Arizona Library Association Conference, November 15th-16th, I attended several presentations that dealt with the new technologies librarians should start utilizing. The first was "Tales, Tips and Tools: Google in Your Library", presented by Ben Bunnell, Manager of Library Partnerships at Google, who also holds an MLS. He talked about Google's advanced search feature, the book search, and Google Scholar. But what I found most relevant was what he told the audience about Google Co-op. This is essentially software that someone can use to create their own custom search engine and post it on their site, or have it hosted by Google. The implications this holds for libraries are numerous. Right away I thought that librarians need to utilize this tool. We could add to the library homepage a box that looks like the Google search box, but only searches websites added by the library staff. In this way it is collaborative, and could satisfy librarians and patrons alike. Librarians would know that more reputable sites are being searched, and patrons can search in the Google format they like. I think even having this as the homepage for library computers would be great. I can't wait to try it out.
He mentioned another feature of Google Co-op that could be used in libraries. "Subscribed Links allow you to add custom search results to Google search for users who trust you. You can display links to your services for your customers, provide news and status information updated in near-real-time, answer questions, calculate useful quantities, and more." This might be added to the Google pages in the library system.
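As a sketch of what that search box might look like on a library homepage (this is my own illustration: the exact markup and the engine ID would come from the Google Co-op control panel, and the cx value below is a made-up placeholder):

    <form action="http://www.google.com/cse">
      <!-- cx identifies the library's custom engine; this value is hypothetical -->
      <input type="hidden" name="cx" value="LIBRARY-ENGINE-ID" />
      <input type="text" name="q" size="40" />
      <input type="submit" value="Search librarian-selected sites" />
    </form>

Only the sites staff have added to the engine would be searched, but the experience would feel just like Google to the patron.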

Podcasting: Syndicating Your Library to the World
ASU has a Library Channel that is an excellent example of what libraries can do with technology to reach out to the community. The podcasts include an RSS feed, so anyone can keep up to date with the podcasts and the latest news. The Library Channel doesn't stop at podcasting, either: it includes audio tours, streaming video, and library news. I think I am a more visual learner, because while podcasts don't always catch my attention, the streaming video did. If I were to use this in a library, I would do regular video podcasts, or PowerPoint presentations on subjects of interest with audio overlaid.

Overall, I came away from the conference hopeful that librarians will utilize some of this new technology. The Google presentation was packed, and even though the podcasting talk was the last presentation of the last day, a good number of people stayed for it.

The Electric Krug-Aid Acid Test

Krug's Trunk Test
I really had an enlightening experience performing the trunk test on these sites. I can only hope that more designers take these valuable guidelines into consideration. There have been numerous times that I have gone so far into a website that I haven't been able to get back. Many times it is a website that I hit upon in google, so my navigation toolbar doesn't even have the original address. It's a very frustrating experience, and I feel following these simple design rules will make that frustration disappear.
The first site I applied the trunk test to was one of my own choosing: the Phoenix Mars Lander site, in particular the multimedia page, which I found while reading about an upcoming event here at the University of Arizona. The Phoenix Lander is a project taking place here at the UA. The Department of Planetary Sciences has a mock Mars landscape and a prototype lander that it will test before launching the real lander in 2007.

After printing out the site, I found that it did a good job of holding up to Krug's acid test. The site had a clear ID, or at least I thought so at first. There is the big splashy bar that says Phoenix Mars Lander 2007, which will also take you back to the home page if you click on it, one of Krug's recommended details. However, above that is the NASA site ID logo, which, when clicked, takes you somewhere completely different, and you can only go back via the back button: no persistent navigation. When you hover the mouse over it, though, the floating text says "external link", and since the NASA site ID is much smaller, it seems to be only a minor issue. I would fix it by relocating the NASA logo somewhere the user wouldn't expect the main site ID to be. The sections of the site are in the 'right' places, at the top and horizontal. The local navigation is also quite clear; I believe it helps that there is not much to navigate on this particular page. Speaking of pages, the page name is also quite visible and simple. It is a page that has the multimedia links for the site, and is simply titled 'Multimedia', both at the top of the page section and in the You Are Here indicator.

The search box is clearly placed; however, there is no 'Go' button, or any button at all. This is potentially a problem for users who don't know that pressing the 'Enter' key will activate the search. When I was working with kids at the library, the browser the library used did not have a Go button. This was extremely puzzling for the parents as well as the kids. If they can't even get started, how will they get anywhere?
I liked the site, so I clicked around a little more, and was disappointed by one thing. The sections on the left (For Kids, Students, Educators, and Media & Press) were clear, I thought. However, as the two screen captures below show, the For Students page and the Multimedia page are two different sites.
The Multimedia page is an example of most of the pages that have persistent navigation in terms of the Phoenix Mars Lander 2007 site:
[Screenshot: the Multimedia page, with the Phoenix Mars Lander 2007 persistent navigation]

The For Students links you to a completely different website:
[Screenshot: the For Students page, a completely different site]
It's informative, but this page would fail the trunk test.
Food Network
The next page I examined with the trunk test was the Food Network's party ideas page, except that it was actually the "Home Entertaining - Gourmet Cooking, Wine, Spirits, Holiday Recipes & Video Tips" page, which was not specified anywhere but in the title bar. That item aside, it held up fairly well to the trunk test. The only confusion the page caused me was in its local navigation, which was split between the left-hand column, some navigation links in the middle next to the ad, and finally more at the bottom of the page. I think they need to group these links together to make the page more user-friendly. However, one thing Krug mentions that isn't part of the acid test per se is utilities. The Food Network seemed to get this right, since I love a page that offers a site map. The site map allows for easier searching if I get frustrated by the navigational tools. I liked the page, though, and might just try that German Cheddar and Beer Fondue recipe!

Backpacker.com
Backpacker.com's gear site was not bad, but it was probably the least well designed of the sites I reviewed. My first impression was that it was very busy. Many of the test elements were there, and this one came closest to having a visible page name, GEAR@BACKPACKER, though the true page title was "Hiking Boots and other Backpacking Gear from Backpacker magazine". Quite a mouthful! The page did stray from some conventions, but not in an illogical way. The local navigation links were grouped together, but in the center of the page, not on the left. The search box was where I would expect it, and the sections were tabbed like those of the other two sites I evaluated. It did not have a You Are Here indicator, though, which would have been helpful. One unique thing it included that I liked was the date, which implied to me that the site was fairly up to date. There was also a Back to Home button, but it was buried all the way down at the bottom of the site. Not many people are going to find it.

In relation to other information...
I wanted to tie in the trunk test with some of the other standards for good web design that we've been given. I am happy to say that probably none of these sites would end up on the Daily Sucker, or the top ten Web Pages that Suck, at least not in my layman's opinion. Unlike the Pope's page, the images were kept to a minimum, though backpacker.com was somewhat noisy. None of them seemed to have the dreaded 'Mystery Meat Navigation' either.
However, I did apply rule number one of the Top Ten Mistakes in Web Design (Jakob Nielsen's Alertbox). I was disappointed to find that the Phoenix Mars Lander search box was completely unforgiving. If they really want to make the site kid- and student-friendly, it would serve them well to have a less literal search box, or an advanced search tool. Backpacker.com had the same flaw. Food Network was better: for example, I searched for 'toffu', and it came up with no results but a suggested term. (It was toffee, but you take what you can get.) I also misspelled caramel ('carmel') and actually came up with some hits from that. From applying these other techniques, I can see that the guidelines for user interfaces from different sources support each other.

In closing, I would like to give my support to what our recent speaker Joseph Boudreaux said, "We are hard-wired to use tools in certain ways". The trunk test certainly addresses that hard-wired facet, and I hope that more designers will put this truth into practice. When asked for my opinion, I will certainly keep all the design guidelines in mind.

References:
Krug, Steve. (2006). Don't Make Me Think. Berkeley: New Riders Publishing.

Phoenix Mars Lander. Retrieved October 19th, 2006 from http://phoenix.lpl.arizona.edu/multimedia/

Foodnetwork.com. Retrieved October 20th, 2006 from http://www.foodnetwork.com/food/entertaining

Backpacker.com. Retrieved October 20th, 2006 from http://www.backpacker.com/gear

Nielsen, J. (2004). Top Ten Mistakes in Web Design (Jakob Nielsen's Alertbox). Retrieved October 13th, 2006, from http://www.useit.com/alertbox/9605.html

Flanders, V., Dean Peters. (2002). Web Pages That Suck. Retrieved October 20th, 2006 from http://www.webpagesthatsuck.com/

XML as the building block

In order to better understand how XML is extending the initial enterprise of the four building blocks, "an Internet driven by TCP/IP that provides reliable global communication; HTTP, a simple protocol for delivering files; a tag-based language for specifying how data should be displayed; and the browser, a graphical user interface for displaying HTML data (Coyle)", one could compare it with the design goals for XML, as expressed by the W3C:
The ten design goals for XML (somewhat like Moses' Ten Commandments) are:
1. XML shall be straightforwardly usable over the Internet.
2. XML shall support a wide variety of applications.
3. XML shall be compatible with SGML.
4. It shall be easy to write programs which process XML documents.
5. The number of optional features in XML is to be kept to the absolute minimum, ideally zero.
6. XML documents should be human-legible and reasonably clear.
7. The XML design should be prepared quickly.
8. The design of XML shall be formal and concise.
9. XML documents shall be easy to create.
10. Terseness in XML markup is of minimal importance.

So how does XML help provide reliable global communication? According to design goal 1, XML shall be usable over the internet. It does not say the internet here in the United States, but the internet in general. This means that, with an internet connection and a little XML know-how, anyone anywhere should be able to send information via TCP/IP that will be recognized by any other computer with an internet connection. It is a simplified way of sharing data across systems, particularly over the internet.
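As a tiny illustration (the element names here are my own, invented for the example), a well-formed XML document is nothing but self-describing plain text, which is why any system on the network can parse it:

    <?xml version="1.0" encoding="UTF-8"?>
    <book>
      <title>Don't Make Me Think</title>
      <author>Steve Krug</author>
      <year>2006</year>
    </book>

There is no binary format and no platform dependency; the tags travel with the data.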
As for extending HTTP into the future, I believe that XML will be used in the next version of HTTP. Since HTTP was originally designed as a way to publish and receive HTML pages, it makes sense that the next version would be a way to universalize the sending and receiving of XML pages. In other words, maybe the future will see an XML-based language for webpages that allows all information to be sent and received reliably every time. Design goals 1, 2, and 4 show how XML will account for these things.

As a tag-based markup language, XML extends the universality of HTML. As a new language, XHTML combines HTML and XML in a way that allows for automated processing, unlike HTML, which was more flexible and therefore less standard. "XHTML consists of all the elements in HTML 4.01 combined with the syntax of XML" (Wikipedia). The problem with HTML is that it "addresses content and structure" (Rhyno 72). For websites that have a large number of pages, XML gives those pages a uniformity of syntax that HTML was not capable of. As I read in several different places, HTML was good in the beginning, when webpages were few and could be modified individually. Now that this is no longer the case, a more standard language had to be invented. Goals 2, 3, 4, 6, and 9 help make this possible.
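A quick illustration of the difference, using a snippet of my own invention: browsers tolerate sloppy HTML, while XHTML demands strict XML syntax (lowercase tags, every element closed, every attribute quoted), which is what makes automated processing possible.

    <!-- Forgiving HTML: browsers render this despite the unclosed tags -->
    <P>New arrivals
    <BR>
    <IMG SRC=shelf.jpg>

    <!-- The same content as strict XHTML -->
    <p>New arrivals</p>
    <br />
    <img src="shelf.jpg" alt="New arrivals shelf" />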
When researching this particular subject, I was reminded of the cascading style sheets we created for our last exercise. To me, it seems that XML or XHTML and CSS can go hand in hand in preparing for the future of the internet. XHTML will be used to create the pages, in a uniform, consistent format. CSS will be used to create the style for those pages, in a ubiquitous manner. This will in turn make the data more accessible and universal. It will be something that all browsers can read and display correctly every time.
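For instance (again my own sketch), every XHTML page can carry nothing but structure and point at one shared stylesheet, so presentation lives in a single file:

    <!-- In the head of each XHTML page: structure only, no styling -->
    <link rel="stylesheet" type="text/css" href="site.css" />

    /* In site.css: presentation for the whole site */
    h1 { font-family: Verdana, sans-serif; color: #003366; }
    p  { line-height: 1.4; }

Change site.css once and every page on the site updates; no individual page needs to be touched.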

How will XML build upon the browser? Well, as of right now, the majority of browsers support HTML, which is an application of SGML, the parent of XML. Since XML is compatible with SGML, I believe that means browsers will support XML. Especially since the new dominant language appears to be XHTML, browsers will need to be XHTML compatible, or they won't support very many pages.

XML and libraries, how it builds upon the building blocks
XML is also preparing us for the future in the way data is stored. Rhyno lists three ways in which XML is useful to libraries, it is well formed, can be validated and checked for consistency and it separates content from presentation. While I have discussed these points in some way already, it is important to see how libraries, 'information containers', will utilize XML based applications for the increasing amount of electronic data it will be storing in the future. I think one of the most important elements for this future is the DTD (document type definition) that XML adopted from SGML. As Rhyno says, "Identifying or authoring the appropriate DTD is one of the most important steps in managing a library's digital collection". DTD is used to describe a document, or a portion of it authored in DTD. This makes it easy to search for the correct type of document within the language, since the tags are universal. When searching through hundreds or thousands of documents, having a universal tag or language is invaluable.
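Here is a hedged sketch of what that looks like (the element names are invented for illustration): the DTD declares which elements a valid record may contain, and a validating parser rejects any document that doesn't conform.

    <!-- record.dtd: every record must have a title, then one or more authors -->
    <!ELEMENT record (title, author+)>
    <!ELEMENT title  (#PCDATA)>
    <!ELEMENT author (#PCDATA)>

    <!-- A document declaring conformance to that DTD -->
    <!DOCTYPE record SYSTEM "record.dtd">
    <record>
      <title>Close to Shore</title>
      <author>Michael Capuzzo</author>
    </record>

Because every conforming record is guaranteed to have the same elements, software can search and index thousands of them without guessing at their structure.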

Concluding Thoughts
I appreciate the concise nature of XML, though it can be unforgiving. I believe it is part of the future of the internet and of electronic communications because of its universality. Using it in podcasts, my first experience with it, was quite easy. I appreciate how you can just cut and paste the section you need and adapt it for future podcasts. This makes so much more sense than reinventing the wheel, i.e., rewriting the code, each time. Instead, the precise language allows for an ordered way to update the podcasts and keep them in one category. Like Dreamweaver and Nvu, the open-source editor I have used, I believe there will be an editor that lets us create XML and XHTML just as easily. There's probably one out there already that I haven't seen yet!
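That cut-and-paste workflow works because a podcast feed is just an RSS document written in XML; publishing a new episode means copying an item block and changing its contents. A hedged sketch, with made-up names and URLs:

    <rss version="2.0">
      <channel>
        <title>Library Tech Podcast</title>
        <link>http://example.edu/podcast</link>
        <item>
          <title>Episode 2: Live Bookmarks</title>
          <enclosure url="http://example.edu/ep2.mp3"
                     length="4834000" type="audio/mpeg" />
        </item>
        <!-- For the next episode, copy the item block and edit it -->
      </channel>
    </rss>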

References:
World Wide Web Consortium. (2006/09/11). Retrieved 9/25/2006 from http://www.w3.org/TR/2006/REC-xml-20060816/

Wikipedia. Retrieved 9/25/2006 from http://en.wikipedia.org/wiki/XHTML

Rhyno, Art. (2005) Introduction to XML. In N. Courtney (Ed.) Technology For The Rest of Us. (71-84). Westport: Libraries Unlimited.

W3Schools Online Web Tutorials. Retrieved 9/25/2006 from http://www.w3schools.com/xhtml/xhtml_why.asp

W3Schools Online Web Tutorials. Retrieved 9/25/2006 from http://www.w3schools.com/xml/xml_whatis.asp

The Funnies and Core Web Technologies

I thought I'd share this recent FoxTrot cartoon:

[FoxTrot cartoon: Jason programming a website for his class]

After doing the reading for this section and attending Wednesday's lecture, I found that the names Jason spits out are actually familiar! The whole strip last week involved Jason programming a website for his class. The timing was perfect, since I found out I had to create a webpage for our class. Mine is, of course, nowhere near as advanced as his would be, but I had fun learning what goes on behind the scenes of the HTML editors. Creating a CSS file was interesting, too. I can see how it would save time across multiple pages on a website. I hope to apply what I've learned to my ePortfolio. It would be a nice thing to have to show future employers!

Staying Healthy with Live Bookmarks

Live Bookmarks Prevent E. Coli Infection?
Though I know the odds that I would be one of the few to get E. coli from the recently tainted batch of spinach were minimal, I can't help but thank my live bookmarks for decreasing those odds. I regularly buy Ready Pac spinach, and actually had some in my refrigerator when I was perusing my live bookmarks. The words "E. coli" and "spinach" caught my eye in the NPR links. After reading the article, I immediately threw away the rest of my spinach and kept an eye out for follow-up stories. As of right now, the organic company is refuting the claim that the outbreak came from the way they grow their spinach. They use cow manure instead of processed fertilizers, and E. coli lives in the intestines of cows, which is why the FDA linked the outbreak to this particular spinach grower. Anyway, without the live bookmarks, I would most likely have finished the spinach and could possibly have ended up quite sick.

And now, the news...
Keeping up with the current news was not something I regularly did in the past. I don't watch the local news because I find it too sensationalist. But with the live bookmarks I can read the headlines and keep up that way. I currently subscribe to three, one more than originally assigned. I first chose CNN.com and BBC News. I have since dropped CNN, but now watch Headline News on TV, something I hadn't done before. I added NPR and the local newspaper, the Arizona Daily Star's hourly update, to stay in touch with local news. Overall, I believe I am much more informed than I ever was. I used to go to great lengths to avoid seeing the news, since I found it very depressing. Now, though, I think it is an advantage to be informed. I also find it nice to be able to converse about local and world news in an intelligent and informed manner.
I have certainly grown accustomed to the Live Bookmarks. When I'm on a computer at work I find myself looking for and missing them. Though we use Firefox, the computers are used by a number of people, so I try to avoid any customization. Because of that though, I do go to the websites to look for the headlines. I also find myself paying more attention to the news headlines on my email homepages, like Yahoo and MSN's Hotmail. The Yahoo news headlines are very catchy, like this story on recently reunited Holocaust survivor siblings, but unlike the television news, I can pick and choose what to read.

Conclusion
Firefox's Live Bookmarks have helped me stay current with world and local news. I feel that knowing these things helps me become a more informed consumer, and when the next elections come around, a more informed voter. In this way I can contribute to society in a positive manner. I do not know if I will add any more live bookmarks, but I am interested in what I have read for this section on aggregators. The only thing I don't like about the bookmarks is that they are not all displayed in the bar at the top. I like to be organized, and I think an aggregator may make that possible. I am looking forward to finding out more about them.

"In 1999": Technology According to a student from 1906

In Close to Shore, a great book I'm reading, Michael Capuzzo includes a poem by a 1906 prep school student in order to re-create a particular time in history. What's great about it is that the student uses 'Martian style' alongside 'wireless telephone' and 'self-reading books'. I feel the tone of the poem is humorous, but how strange that he would use terms we still use today, and that those terms describe very modern technology!

In 1999

Father goes to the office
In his new bi-aeroplane
And talks by wireless telephone
To Uncle John--in Spain
Mother goes a-shopping
She buys things more or less
And has them sent home C.O.D.
Via "Monorail Express."
Sister goes a-calling
She stays here and there awhile
And discusses with her many friends
The latest Martian style
And when her calling list is through
She finds a library nook
And there with great enjoyment hears
A new self-reading book.

What is P2P, or Does Anyone Remember Napster?

P2P (peer-to-peer) Communications Model
Peer-to-peer is direct file sharing over the internet between two computers, each with an assigned IP address, without downloading from a central server. In essence, in a true peer-to-peer network, the client/server roles are merged and there is no central server (Wikipedia). Each computer then acts as a server from which any other computer within the network can download files. A little bit about networks: as Robert Molyneux explains in his essay "Computer Networks", most networks are "partial mesh". In other words, not every computer is directly linked to every other computer. The model actually looks like this:
[Diagram: computers in a partial-mesh network, connected through the internet 'cloud']
That hazy cloud that is the internet is where we can focus on what P2P means in terms of networking. It is almost as if the internet allows P2P to directly connect every computer. What once seemed improbable, connecting every computer to every other computer, becomes simpler: P2P allows this connection. Because of the power and speed of home computers these days, the next evolutionary step in computer use should be the merging of client and server roles, with more home computers used in a way similar to how servers are used.

This sounds fabulous, right? The sobering thought for all this peer-to-peer networking is security. With KaZaA, users had the eventual problem of piggybacking: when downloading files, malicious files would piggyback onto legitimate ones and harm your computer in some way. Because the network was unstructured, there was no way of regulating this. KaZaA also ran a program on the computer called a satellite. This satellite often kept the internet connection open, allowing programs to be downloaded onto your computer without your knowledge. LimeWire, one of the newer peer-to-peer applications, seems to have better security, as it is firewalled and those firewalls can be adjusted.

Bit Torrent
Perhaps the newest and 'safest' peer-to-peer technology is BitTorrent. BitTorrent was created by programmer Bram Cohen and is now maintained by BitTorrent, Inc. It is an open-source application; the code was released to the public. Any client is capable of "preparing, requesting, and transmitting any type of computer file over a network using the BitTorrent protocol". BitTorrent is the name of a peer-to-peer file distribution protocol, which means it is a new set of rules for file sharing. An interesting bit of information: some estimates hold that 35% of internet traffic moves through BitTorrent applications. This may be high, but it is still interesting. The way BitTorrent works is that the user downloads a BitTorrent client, such as the official one from the home page or Azureus (available from sourceforge.net). Then the user searches a torrent index like isoHunt; clicking on the file they want opens it in the BitTorrent application. The interesting thing about BitTorrent is that the more you share, the quicker your downloads. Sharing a file back out is called 'seeding', and a 'leech' is a person who downloads more than they upload. This seems to encourage some of the tenets that early web developer Tim Berners-Lee abides by: decentralization, openness, and fairness. By encouraging people to allow uploads, there is more information out there for everyone, rather than one person hoarding all the files. BitTorrent is also notable because it distributes large amounts of data widely without using up huge amounts of costly bandwidth. The concern here, as with all peer-to-peer clients, is authorized use. Companies encourage the use of BitTorrent to download open-source software. As for music, perhaps appropriately licensed material found in Creative Commons could be downloaded through BitTorrent. Apparently Warner Bros. plans to distribute some films via BitTorrent.
P2P and Libraries: Past, Present and Future
Libraries were instrumental in helping to keep P2P networks available. According to ODLIS:
In September 2003, the American Library Association (ALA) joined four other library associations in an amicus brief on behalf of peer-to-peer file-sharing companies Grokster and Morpheus in their defense against an infringement suit brought by MGM Studios and 27 other entertainment companies to ensure that file-sharing technology, which can be used to benefit society without infringing intellectual property rights, is not unduly restricted.
Since patrons are not typically allowed to download software onto library computers, this battle was truly about protecting freedom of information outside the library, with the qualification that peer-to-peer sharing is acceptable only when it does not infringe on intellectual property rights.

Libraries now recognize the advantages of P2P networking for their own use. From the Encyclopedia of Information and Library Science:
"How will P2P networking affect the delivery of reference services in and by public libraries? The effects are likely to be manifest on two levels. First, it is expected that groupware applications based in significant part on P2P architectures will emerge as important and widely employed services for computer-supported work, and one of the dimensions of computer-supported work is that reference librarians and other information specialists will be integrated into computer-supported workgroups, in a manner akin to the way in which medical librarians have been absorbed into clinical care groups at many hospitals and medical centers. The second dimension of effect will be seen in the way P2P networking enables reference librarians to deal more or less simultaneously with the needs of individuals and groups;"


Personal Experience
My experience with P2P networks is with the unstructured kind. Three previously popular file-sharing networks, Napster, Gnutella, and KaZaA, were all unstructured P2P networks. Napster allowed users to download files from other users. Napster also spread the use of the compressed MP3 format, which allowed audio files to be downloaded more speedily. With dial-up being the most prominent connection, these compressed files were ideal. The search engine would flood the network with the search term, looking for the file. The more popular the file, the more likely the user would be to find it. Herein lies the problem with unstructured networks: rather than running through a list of files on one central server and locating the file, the search goes anywhere and everywhere, sometimes timing out because there are too many files out there to find one less popular individual file. I now use BitTorrent; it's faster and seemingly safer. There are still timeouts, and files that won't download. Such is the price we pay for free information!

What I Learned from Molyneux and Drew

Molyneux
Molyneux's discourse on computer networks reinforced many things that I 'kind of' knew about networks, but explained them in plain terms. There were two things in particular that I felt Molyneux explained very well. One was the OSI reference model. I am more familiar with the TCP/IP model and its four layers: Application, Transport, Network, and Link. The OSI model, with seven layers, is more complicated, but at the same time allows for a more comprehensive look at networking. The most interesting part of the OSI model, I feel, is the physical layer. Before reading Molyneux I felt disconnected from the actual physical process of transferring data; we think of cyberspace and it seems intangible. With the OSI model, that flow of data is not intangible. Rather, the model explains how data is transferred as signals.

"Signals are what layer 1 is about"
This second point finally cleared up the difference between analog and digital signals for me. Analog signals are wave forms. Over time, they are described by their amplitude (the height of the wave), frequency (how many complete cycles occur per second; its inverse, the period, is the duration from peak to peak), and phase (the offset that determines where in its cycle the wave starts). Interesting, too, was that the initial infrastructure for data networks was built at Bell Labs. I will have to ask my Dad if he worked on that while he was there! A digital signal is a different, simpler construct: discrete levels stand for 1s and 0s, with high levels read as 1s and low levels as 0s. As far as I can tell, they cannot be manipulated the way analog signals can to produce different sounds, because they are in their simplest form. This makes them more usable, because that sameness means as many as possible can be transmitted to many devices, as long as those devices are equipped to receive digital signals. There is no loss of quality, merely a need to decide whether something was a 1 or a 0, far easier than cleaning up analog signals.
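To keep those three terms straight, it helps to write the wave down (a standard textbook formula, not one from Molyneux's chapter):

    s(t) = A · sin(2πft + φ)

where A is the amplitude (the height of the wave), f is the frequency in cycles per second (the period, peak to peak, is 1/f), and φ is the phase (how far into its cycle the wave is at time zero).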
Drew
Drew's explanation of wireless local area networks, in Chapter 2, cleared up some of the mystery that I felt surrounded them. The library I worked at in Las Vegas had recently implemented wireless access points. I wish I had had the background Drew provides to share with the patrons: how it works, why it only works in some areas, and so on. As for why it only works in certain areas, I now realize it is because of the available access points, as explained by Drew:
"The Access Point (AP) is a transmitter/receiver that acts as a connections between wireless clients and wired networks."
Our library had only one AP. Because of that, instead of being able to walk anywhere in the building and re-connect to the network via the next AP, users would lose their connection. However, the building is small, only one floor, so the area that is covered is fairly large, even extending to the parking lot, according to some patrons.
Wireless standards
When we were asked to compare our computer specifications to those set forth by the College of Law, one item that I didn't completely understand, but found information about, was IEEE 802.11. As far as I can tell from this article, my computer can use all the standards in the 802.11 wireless family; its properties display "IEEE 802.1x", which is, strictly speaking, a related authentication standard used alongside them. I believe this means my laptop can talk to a variety of other devices. So far that seems to be the case!

Making the Grade!

In this week's module, Hardware, Software and Network Basics, we were asked to view the computer specifications recommended by SIRLS and the College of Law. The College of Law recommends that students buying a computer for school follow the specifications on its list. I am happy to say that my computer is up to their standards in every category. I initially thought I might have one or two minor things that were out of date. The guidelines were last updated August 15, 2006, and I purchased mine in December of 2005. Given the speed at which technology advances, I was pleasantly surprised to learn that my bit of technology has kept up.
The College of Law first urges students to purchase a laptop rather than a desktop computer. The reason behind this, as with all laptop recommendations, is mobility. With a laptop, a student can take their work almost anywhere. I say almost, because we are not quite a wireless campus yet! But Student Union access, College of Law access, and access at the IC in the main library add up to quite a large range of places to take your laptop. Additionally, there are several shops on University that offer free wireless, though you wouldn't want to work on highly sensitive materials over their unsecured networks. The main point is that with a desktop, none of those options are available to you.
The hardware that I have is exactly as listed on the site. The only differences are that in some cases I exceed the minimum required specifications. For example, I have a 1.4 GHz Intel Celeron processor, while the College of Law recommends a minimum 1.0 GHz processor. I also have 33.4 gigabytes of usable space on my hard drive, slightly more than the 30 GB recommendation. I have an Ethernet card (this I specifically made sure of when purchasing the laptop) and an internal wireless card. The only thing on the list I don't have is a security kit. I'm not really sure what one of those is, but since it's on the list, I will be looking into it a bit more.
I had to work a little to find the specifications for my computer. They all centered on the My Computer icon on my desktop, but the information I needed was in various little pockets. I right-clicked and opened the Properties window to find the processor and RAM information, as well as to double-check that my computer is running Windows XP. To find the hard drive capacity, I opened My Computer and clicked on C: to see my available space. The rest of the information was in a place I had never been: after right-clicking on My Computer, this time I chose the 'Manage' option, which opened the Computer Management window. There is an option on the list called 'Device Manager', and that is where I found the relevant information. It lists all the devices installed on the computer, and you can right-click the ones you are interested in to learn more about their properties. This was a very useful tool.
SIRLS doesn't add much to the College of Law's specs, except that SIRLS feels students should definitely own their own computers, while the College feels it provides enough resources that students don't have to own their own. My personal experience is that it has been very convenient to have my own. This way I can download any necessary software and have a reliable connection to the internet. That is exactly what I would recommend to anyone taking a virtual course: what these two pages list. Windows XP may be replaced with Vista, but the only course of action a student can take is to find out whether their hardware will support the future OS. Living on campus with a LAN is ideal, but a high-speed modem did just as well when I was a purely virtual student. The only other pieces of software a student might want, which I use through campus computers, are Dreamweaver and the full version of Adobe Acrobat. These seem useful to have, and with a student rate, it's almost feasible!
The last thing I have to say is go Firefox, all the way. It is a far superior browser to Explorer. I could only use Explorer this summer in Ireland, and I missed my tabs. Now I would miss tabs and live bookmarks...

What Would I Ask Sir Tim Berners-Lee?

Sir Tim Berners-Lee is an interesting person; he seems uncorrupted by the temptation to make money off his invention, the World Wide Web. As one article briefly mentioned, his partner at the time became a multi-millionaire as one of the founders of the web browser Netscape. And yet Tim Berners-Lee still works in a small office at MIT, only recently winning the 1.3 million euro Millennium Technology Prize, a modest amount of money compared to other internet moguls. Given this modesty, I wonder what kind of ethos guides his life decisions.
Based on a statement in one article, he said:
"A good blogger when he says that something's happened will have a point to back, and there's a certain ethos within the blogging community, you always point to your source, you point all the way back to the original article."

And in yet another article the term ethos comes up again: web science
"has its own ethos: decentralization to avoid social and technical bottlenecks, openness to the reuse of information in unexpected ways, and fairness."
This second view of ethos was written by Tim Berners-Lee himself. If I could ask him about his ethos, his moral outlook, I would like to know in greater detail what it might be. Since he is such a strong proponent of openness and fair use, his outlook may be a good guideline for librarians to follow.

Internet Activities

This week's module involved internet history and familiarizing ourselves with some basic internet terminology. In the first exercise I located the IP address of the computer I use. It is my laptop, IP address 128.196.107.20. This search reminded me of registering my computer for campus access. To register the computer, one needs to locate the MAC address, which is different from the IP address. From what I understand, the MAC address is the physical address of the network adapter you are using. I imagine this is useful for internet security here on campus, because once you have the MAC address you must go through Student Link and officially register the address with the University. As an employee at the university library's reference desk, I try to use the term 'adapter address'. The term MAC confuses everyone, since it is most popularly associated with Apple's Macintosh computers!

The second part of this exercise was to find out where the IP address 66.253.148.213 was located. Using ARIN WHOIS, two entries came up that contained the IP address in their ranges. One was Distributed Management Information Systems, Inc., located in Illinois. The more interesting entry is the Royal Entrada Real Oeste Apartments on University: more interesting because that address is right up the street from me, which somehow makes the IP address more tangible.

The tracert activity was actually very frustrating for me, at first.
No matter how many times I tried, when I ran the command-prompt program the hops always came back 'Request timed out'. At 30 hops, I had hoped for more of a response than that. I wondered if it was my computer. In class I tried it on the lab computer, and it immediately came up with several IP addresses. I did not write any of those down, however, so I was unable to look them up in the ARIN database. But today I finally received one IP address in response to my tracert! I used the tracert command to trace arizona.edu instead of www.arizona.edu. I'm not sure why that worked instead. The IP address came up as 128.196.128.233, which, when searched through ARIN, appears to be the UA main address. This result was different from the Royal Entrada Apartments IP address search: there was no range of addresses. The following is from the result:
NetType: Direct Assignment
NameServer: ARIZONA.EDU
NameServer: NS-REMOTE.ARIZONA.EDU
NameServer: CS.ARIZONA.EDU
NameServer: PENDRAGON.CS.PURDUE.EDU
The term 'Direct Assignment', as far as I can tell, means the address block is assigned directly to the university rather than delegated through an ISP. The fourth name server is interesting, too. I wonder why Purdue is listed. Does UA use Purdue's servers? Cool name for the server, by the way; I wonder if there are Arthur and Merlin servers out there...

I had heard of a few network commands before, like ipconfig, ping, and, most recently, finger. Ipconfig I used, as described above, to find my computer's IP address and MAC address. Ping we talked about in class, and I had heard the term 'pinging' other people's computers. The finger command I just recently learned about when reading about the internet and coke machines in activity 6. I put two and two together and figured out that the addresses they listed were to be 'fingered' at the command prompt. I actually got a response from one by typing in the command: finger drink@drink.csh.rit.edu. The response I got was, "Welcome to the CSH drink server / Drink User info on Drink / Balance: 0". I can't believe they've got these machines hooked up to the internet. Laziness is the mother of invention this time.

The only netiquette violation I have ever seen is via email. My mom was just learning how to compose emails, and she sent everything in CAPS for a while. I didn't realize how much it seemed like shouting until I got those emails from her. She is much more considerate these days ;-)

Wireless City in England

I was just looking at my live bookmarks, and the BBC reported that the city of Norwich, England is 'one giant hotspot'. I had heard there were some cities in the U.S. that were close to this, but it is truly amazing to think that you could walk anywhere and use your laptop with its wireless card.
More than 200 antennas are positioned around the city, mainly on lampposts, creating blanket wi-fi coverage.

The city is one giant hotspot, utilising a mesh network which means users can get seamless internet access as they wander the streets.

Kurt Frary, who managed the project at the local authority said: "As a mesh network, if one of the lamppost aerials were to fail, the whole system will compensate to find a way through.

'Glitch free'

"We had 1,800 connections in the first week, more than 2,500 in the second and 3,000 in the third.

You can read the rest of the article, which includes the guidelines for use imposed by the county council, here.

These Live Bookmarks Are Great...Almost too Great!

I've had several live bookmarks in Firefox for a few days now, and they are informative, keep me up to date, and are a little bit distracting! I love being able to keep on top of people's blog posts, like Joey's, whose Skype info I now have. The two news sites I've got keep me very busy. I find I don't just check my multiple email accounts, D2L sites, and blogs; now I also scan the list of news articles at CNN and BBC News. Because of these live bookmarks I find myself more informed about what is going on in the world. I enjoy the dichotomy these two sometimes provide: while CNN is concerned with Karr and the JonBenet Ramsey case, the BBC has more global headlines, like the recent activity in Israel.
However, I have to admit to a little voyeuristic side-trip I took from the serious news.
This case involving Natascha Kampusch, an Austrian girl who was kidnapped eight years ago, is fascinating. She escaped, and shortly after, her captor killed himself by jumping in front of a train. It appears now that Natascha may have Stockholm syndrome. So, at least in my case, it looks like sensationalist journalism can grab me no matter what country it comes from!

A Little Bit About Me...

My family moved often when I was younger and because of this I grew up in a variety of places, including Allentown, PA, Gillette, NJ and Henderson, NV.
I received a Bachelor’s degree in English from Kutztown University in Kutztown, PA. After graduating, I moved to Henderson, where I first worked as a Young People’s Library Page at the Green Valley Library and a Teller at Bank of America. When an opportunity to work full-time as a Young People’s Library Assistant came about, I didn’t hesitate to pursue the position. This job afforded me the opportunity to use my BA in English and keep up with current computer-based technology, which I have always enjoyed working with. It was immediately obvious that this was the field I belonged in and the direction my career should take. I have had several mentors in the library and they encouraged me to get my degree in Library Science. I applied for academic leave from the library district and here I am in Tucson. This is my second semester, and I hope to graduate in December. When not working or studying, I enjoy reading, spending time with close friends, hiking, kayaking and visiting the nation’s parks.
June-July of this year I was in Ireland interning at the Dublin City Centre Library. It was an amazing experience. I got to really know the people of Ireland by working with them as patrons. I also got to explore a beautiful country!
[Photo: the Carrick-a-Rede Rope Bridge in Northern Ireland]
I have taken four virtual classes so far, not including several hybrid classes. I like a mixture of virtual and face-to-face; I think too much of either wouldn't work for me. D2L has been fairly easy to learn to use as a student. This semester I get to find out what it's like on the other side, as I'm a GA for one of the professors! I've got my fingers crossed that my background with computers will help if I run into any confusion. My dad is a computer programmer/software engineer, so I've grown up with computers. He keeps current, so when I have questions, I can rely on him to answer them. He helped me pick out a Dell laptop in December, so I access the internet on that, running Windows XP 2002, through the University's LAN. Currently I use Norton Anti-Virus and Trend Micro PC-cillin, and run Ad-Aware SE Personal for protection.
As far as other technology, I use a cell phone that is WAP-enabled, so I can access the internet that way. I text message often; I think it's interesting how much you can say in such short messages. I also have an iPod nano, which I enjoy listening to while walking around campus.
I do have another blog at blogspot. I use it primarily as an online photo album. My friends from out-of-town look at it to keep up with my year at school. It's a great way to document your life, especially when you have no desire to keep a written journal!