Yikes! Actually, it's not all that bad. I am job searching right now, and the only thing that's really painful is the application packet. I get tears of joy when I see "application letter and resume only." But after two weeks of searching I've had two interviews, both on the phone. We'll see how it goes.
There are so many interesting positions and places to go out there, I can only hope I get offered one of them. I think I can make a difference, and I'm so excited to try! Wish me luck...
In the meantime, I've been hiking. It helps put things in perspective.
Monday, January 15, 2007
Saturday, December 9, 2006
Accessibility Policy
From LVCCLD's website: "Diversity applies to more than race and ethnicity. It applies to physical disabilities, sexual orientation, age, language and social class." Though LVCCLD's policy addresses services for diverse populations, it does not directly address web accessibility. Like many districts, it does not yet have any tools that would support web accessibility for patrons with disabilities. These tools are necessary, and I will explain why, as well as give examples of what tools might be implemented.
ALA states:
In response to two of the 1990 Americans with Disabilities Act's (ADA) mandates: any public library must provide equal services to any person requesting them, regardless of disability, and no qualified individual with a disability shall be excluded from participation or be denied services or be subjected to discrimination.
This mandate has been in effect since 1990, and I believe many public libraries have attempted to provide such services. For example, a library I worked for specified that the distance between any two fixed objects must be at least three feet, to allow wheelchair access. They also had a TTY device, a special device that lets people who are deaf, hard of hearing, or speech-impaired use the telephone to communicate. However, with the light-years of advancement made in technology since 1990, new needs must be considered as part of providing access. The ADA is defined broadly enough that we can include web accessibility.
"I need some solid justifications if the library is to spend all this money on accessibility"
1. Why? Because the ADA instructs us to.
Making the web accessible appeals to the fundamental nature of libraries themselves. Access in order to promote intellectual freedom has always been important, and that access is inhibited for people with disabilities in a regular library setting. Even something as simple as a desktop that can be elevated or lowered would make a difference to a person in a wheelchair. There are many disabilities, some obvious and some hidden, and all of them need to be addressed. Both software and hardware are going to have an impact on accessibility.
2. It would attract people with disabilities to use the library.
In terms of the library today, many of our users come to the library to use a computer. It is important to have wider aisles, but that only helps the roughly 20% of disabled users who are in wheelchairs. If the majority of the 'abled' population wants internet access, then it is only logical that the disabled population wants access as well. These people are part of the voting population, too. When a library funding measure comes up on the ballot, the library, by providing for people with disabilities, may have a stronger position in the community.
3. It would also position the library as an information access point for people with disabilities, their relatives, and their service providers.
In this case, the library is attracting and providing services and information for people who have disabilities and for those connected to them. Like a good medical reference collection, information about and examples of assistive technologies can benefit the community as a whole.
4. We want to guarantee that library services are available on an equal basis to all members of the community.
As I argued under the first point, access in order to promote intellectual freedom is fundamental to libraries, and that access is inhibited for people with disabilities in a regular library setting. Guaranteeing equal access means extending that commitment to the web and to the tools patrons use to reach it.
5. The benefits of accessible web design extend beyond the community of people with disabilities and an aging population.
It enables low technology to access high technology (Waddell), which is a very strong argument. Not only would these adaptive and assistive technologies help those with disabilities, they can also help provide access to the least advantaged members of our society, the poor. This again appeals to our ethical duties as librarians.
How? What?
A representative from the Disability Resources Center, Dawn Hunziker, presented a variety of tools that could be utilized in libraries to increase web accessibility. There are several categories of assistive hardware and software: text-to-speech programs, voice recognition, word prediction, concept-mapping software such as Inspiration, screen magnification, and screen readers. Of these, Dawn recommended two to start out with. After doing the readings, I agree that screen magnification and screen readers would have an immediate impact and dramatically increase accessibility.
In some instances, the library does not need to risk any money at all. At least two text-to-speech programs, ReadPlease and Natural Reader, are free. Libraries could ease into web accessibility with programs like these.
Another way libraries might save some money is by purchasing one or two Macs. Apple has assistive technologies built into its operating system.
Even changing the default browser might make a difference. According to WebAIM:
The focus that Mozilla Firefox places on web standards and the user experience is quickly making it a popular choice for both web developers and end users alike. Firefox is also becoming a popular browser on the accessibility front. Its open-source nature and extensibility are allowing Firefox to be a powerful medium for increased accessibility of web content.
The library must also keep up with Section 508 of the Rehabilitation Act, which outlines accessibility requirements for HTML as well as scripts, applets, and other plug-ins. To make this easier, an interesting tool can be found at Cynthia Says, which lets you test a website against accessibility standards. The University of Arizona's site did well, with some warnings. I tested LVCCLD's site and was confronted by a long list of errors. A demonstration of how inaccessible a library's website is may go a long way toward convincing staff and others to adopt new technologies.
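To make the kinds of errors these checkers flag more concrete, here is a rough sketch of two common fixes in (X)HTML markup; the file name and the wording are made up for illustration:

    <!-- Inaccessible: no text equivalent for the image, no label tied to the form field -->
    <img src="hours.gif" />
    <input type="text" name="q" />

    <!-- Accessible: alt text gives screen readers a text equivalent, and an
         explicit label is associated with the search field -->
    <img src="hours.gif" alt="Branch hours: Monday through Saturday, 9 a.m. to 8 p.m." />
    <label for="q">Search the catalog</label>
    <input type="text" name="q" id="q" />

A screen reader has nothing to announce for the first image, which is exactly the sort of error a checker like Cynthia Says reports.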
Works Cited
Hensley, J. (2005). Adaptive Technologies. In N. Courtney (Ed.), Technology for the Rest of Us. Westport, CT: Libraries Unlimited.
W3C, Web Accessibility Initiative. Accessed 12/07/06. http://www.w3.org/WAI/
ASCLA, Association of Specialized and Cooperative Library Agencies. Accessed 12/07/06. http://www.ala.org/ala/ascla/asclaourassoc/guidetopoliciesb/guidepolicies.htm
Waddell, C. Applying the ADA to the Internet: A Web Accessibility Standard. Accessed 12/07/06. http://www.icdri.org/CynthiaW/applying_the_ada_to_the_internet.htm
Cynthia Says. Accessed 12/07/06. http://www.icdri.org/test_your_site_now.htm
WebAIM. Accessed 12/07/06. http://www.webaim.org
Digitization, Not Preservation
"Technology allows us to see things we might otherwise miss, hear things we might otherwise fail to notice, and learn from those who came before about our past, our present, and possibilities for our future."
This summer I took a class in Archives and was under the impression that digitization was a form of preservation. Because of this, I was quite surprised to read Hastings and Lewis's assertion in Chapter 11, "Let's Get Digital," that "digitization is not preservation". I understand it now like this: digitization is a means of providing access, but it does not preserve the item itself. And materials that are 'born digital' don't need digitization, but they do need preservation, so that people can access them in new formats. So the quote from the CDH is quite apropos: technology truly allows us access to things we might never have seen. Access is one of the most important things that libraries provide. It is part of intellectual freedom, a natural right that everyone, as a rational autonomous being, has. Digitization facilitates this right.
So, which institutions are practicing digitization, and in turn providing better access?
Cornell University
According to the digitization blog, Cornell has apparently been a long-time partner with Microsoft and has just agreed to participate in Microsoft's Live Book Search.
The initiative will focus on works already in the public domain and allow students, researchers, and scholars to use Live Book Search to locate and read books from Cornell University Library's outstanding collections regardless of where they reside in the world. It supports both the library's long-standing commitment to make its collections broadly available and Cornell President David Skorton's goal to increase the impact of the university beyond campus boundaries.

Cornell's digital library site is quite advanced. I wanted to explore what they might be offering Microsoft and came across the Edgar Allan Poe digital exhibition. They have digitized a large portion of the collection, so that you don't have to go to Cornell to see it. There are pictures, manuscripts, playbills, and newspaper entries, all from the late 1700s to the early 1800s. I do not know if these manuscripts will be part of what is offered to Microsoft, but they are a brilliant example of what can be done with digitization.
Online Archive of California
As part of our activities for this unit, we were to check out the OAC. I realized I have been to this site before, when I did research for my Archives class paper. So this time I searched for comics, keeping the institution's goal in mind. I was unable to locate any online, even though some were published before the copyright cutoff. In this instance I imagine that the person who donated the collection may have made a formal request that it be available only in person. Their mission does seem to be a working reality, though. The OAC looks to develop extensive finding aids and make them available in a single online database. The finding aids for the objects I saw were quite extensive; they listed contents by box, and sometimes by folder.
DLCMS (digital library content management system)
I found this project interesting because the author of the blog is developing this content in Drupal. Drupal is significant because it is the platform we upload our ePortfolios to. Mark Jordan's "goal is to develop a single Drupal module, called DLCMS, that packages up the document handlers and allows implementors to create a digital collection quickly and easily without having to perform unreasonable amounts of configuration or customization." From our other readings I know that OCLC is trying to do something similar with CONTENTdm. But what I find interesting about Jordan's project is that he is using pre-existing software, another example of 'mashing up' two things to get one superior product. A product like this seems very important: the fact that there is nothing out there that everyone is using creates a large void, and I wonder how many other people are trying to fill it. Any archival page that I've visited seems quite limited in what it does. There is no single set of rules for building a finding aid, unlike the creation of a MARC record, for example. Perhaps CONTENTdm will provide that. But until then it is necessary to develop programs like Jordan is doing.
Digital Library Federation
The Digital Library Federation is an organization committed to preserving digital information and helping others do the same. This reflects the missions of the digitization projects I've looked at, and the software aspect as well.
The Digital Library Federation is an international association of libraries and allied institutions. Its mission is to enable new research and scholarship of its members, students, scholars, lifelong learners, and the general public by developing an international network of digital libraries. DLF relies on collaboration, the expertise of its members, and a nimble, flexible, organizational structure to fulfill its mission.

For our wiki, my group was assigned the DLF. The DLF focuses on and helps support five aspects of digital libraries: digital collections, digital production, digital preservation, usage and users, and digital library architectures. I thought I would delve deeper into digital library architecture.
The DLF utilizes FEDORA, a content management program. FEDORA is open-source software that gives organizations a flexible, service-oriented architecture for managing and delivering their digital content. Not all the libraries affiliated with the DLF utilize FEDORA, but I can imagine how amazing it would be if they could: with one program, all the individual databases could conceivably be searched. While the DLF does not do the digitization itself, it seems to be a support center and network for those that do.
Conclusion
This unit made me much more familiar with digitization techniques. After reviewing Cornell's digital imaging tutorial, I feel I have a better grasp of how objects get digitized. The tutorial was very well done; I liked the interactive questions. After doing the tutorial and our readings, I was able to better understand the projects that I learned about on the digitization blog. It's absolutely amazing how many of our library school classes intersect. In this post alone I drew on knowledge from my ethics class, from Archives, and from all that we've learned in this class, Introduction to Information Technology.
AZLA and Web 2.0
Tales, Tips and Tools: Google in Your Library
At the Arizona Library Association Conference, November 15th-16th, I attended several presentations that dealt with the new technologies librarians should start utilizing. The first was "Tales, Tips and Tools: Google in Your Library," presented by Ben Bunnell, Manager of Library Partnerships at Google, who also holds an MLS. He talked about Google's advanced search features, the book search, and Google Scholar. But what I found most relevant was what he told the audience about Google Co-op. This is essentially software that someone can use to create their own custom search engine and post it on their site, or have it hosted by Google. The implications this holds for libraries are numerous. Right away I thought that librarians need to utilize this tool. We could add a box to the library homepage that looks like the Google search box but only searches websites added by the library staff. In this way it is collaborative, and could satisfy librarians and patrons alike: librarians would know that more reputable sites are being searched, and patrons can search in the Google format they like. I think even having this as the homepage for library computers would be great. I can't wait to try it out.
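As a rough sketch only (the real embed code is generated by Google for each Co-op search engine, and the engine ID below is a placeholder), the box on a library homepage might look something like this:

    <form action="http://www.google.com/cse" method="get">
      <!-- "cx" identifies the library's Co-op search engine; this value is a placeholder -->
      <input type="hidden" name="cx" value="LIBRARY-ENGINE-ID" />
      <input type="text" name="q" size="31" />
      <input type="submit" value="Search library-selected sites" />
    </form>

Staff would maintain the list of included websites in the Co-op account itself, so the box on the homepage would never have to change.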
He mentioned another feature of Google Co-op that could be used in libraries. "Subscribed Links allow you to add custom search results to Google search for users who trust you. You can display links to your services for your customers, provide news and status information updated in near-real-time, answer questions, calculate useful quantities, and more." This might be added to the Google pages in the library system.
Podcasting: Syndicating Your Library to the World
ASU has a Library Channel that is an excellent example of what libraries can do with technology to reach out to the community. The podcasts include an RSS feed, so that anyone can keep up to date with the episodes and the latest news. The Library Channel doesn't stop at podcasting, either: it includes audio tours, streaming video, and library news. I think I am a more visual learner, because while podcasts don't always catch my attention, the streaming video did. If I were to use this in a library I would do regular video podcasts, or PowerPoint presentations on subjects of interest with audio overlaid.
Overall, I came away from the conference hopeful that librarians will utilize some of this new technology. The Google presentation was packed, and even though the podcasting session was the last presentation of the last day, a good number of people stayed for it.
The Electric Krug-Aid Acid Test
Krug's Trunk Test
I really had an enlightening experience performing the trunk test on these sites. I can only hope that more designers take these valuable guidelines into consideration. There have been numerous times that I have gone so far into a website that I haven't been able to get back. Many times it is a website that I hit upon through Google, so my navigation toolbar doesn't even have the original address. It's a very frustrating experience, and I feel that following these simple design rules will make that frustration disappear.
The first site I applied the trunk test to was one of my own choosing, the Phoenix Mars Lander site, in particular the multimedia page, which I found while reading about an upcoming event here at the University of Arizona. The Phoenix Lander is a project taking place here at the UA: the Department of Planetary Sciences has a mock Mars landscape and a prototype lander that it will test before launching the real lander in 2007. After printing out the site, I found that it did a good job of holding up to Krug's acid test.

The site had a clear ID, or at least I thought so at first. There is the big splashy bar that says Phoenix Mars Lander 2007, which will also take you back to the home page if you click on it, one of Krug's recommended details. However, above that is the NASA site ID logo, which, when clicked, takes you somewhere completely different, and you can only go back via the back button; there is no persistent navigation. When you hover the mouse over it, though, the floating text says external link, and since the NASA site ID is much smaller, it seems to be only a minor issue. I would fix it by relocating the NASA logo to somewhere the user wouldn't expect the main site ID to be. The sections of the site are in the 'right' places, at the top and horizontal. The local navigation is also quite clear; I believe it helps that there is not much to navigate on this particular page. Speaking of pages, the page name is also quite visible and simple. It is a page that holds the multimedia links for the site, and it is simply titled 'Multimedia', both at the top of the page section and in the You Are Here indicator.

The search box is clearly placed; however, there is no 'Go' button, or any button at all. This is potentially a problem for users who don't know that pressing the 'Enter' key will activate the search. When I was working with kids at the library, the browser that the library used did not have a Go button, and this was extremely puzzling for the parents as well as the kids. If they can't even get started, how will they get anywhere?
I liked the site, so I clicked around a little more, and was disappointed by one thing. The sections on the left: For Kids, Students, Educators and Media & Press were clear, I thought. However, as the two screen capture shots will show, the For Students page and Multimedia page are two different sites.
The Multimedia page is an example of most of the pages that have persistent navigation in terms of the Phoenix Mars Lander 2007 site:
The For Students link takes you to a completely different website:
It's informative, but this page would fail the trunk test.
Food Network
The next page I examined with the trunk test was the Food Network's party ideas page, except that it was actually the "Home Entertaining - Gourmet Cooking, Wine, Spirits, Holiday Recipes & Video Tips" page, which was not specified anywhere but in the title bar. Apart from failing that item, it held up fairly well to the trunk test. The only confusion the page caused me was in its local navigation, which was split between the left-hand column, some navigation links in the middle next to the ad, and finally more at the bottom of the page. I think they need to group these links together to make the page more user-friendly. However, one thing Krug mentions that isn't part of the acid test per se is utilities. The Food Network seemed to get this right, since I love a page that offers a site map. The site map allows for easier searching if I get frustrated by the navigational tools. I liked the page, though, and might just try that German Cheddar and Beer Fondue recipe!
Backpacker.com
Backpacker.com's gear site was not bad, but its design was probably my least favorite of the sites I reviewed. My first impression was that it was very busy. Many of the test elements were there, and this was the one that came closest to having a visible page name, GEAR@BACKPACKER, though the true page title was Hiking Boots and other Backpacking Gear from Backpacker magazine. Quite a mouthful! The page did stray from some conventions, but not in an illogical way. The local navigation links were grouped together, but in the center of the page, not on the left. The search box was where I would expect it, and the sections were tabbed like the other two sites I evaluated. It did not have a You Are Here item, though, which would have been helpful. One unique thing it included that I liked was the date; this implied to me that the site was fairly up to date. There was also a Back to Home button, but it was buried all the way down at the bottom of the site. Not many people are going to find it.
In relation to other information...
I wanted to tie in the trunk test with some of the other standards for good web design that we've been given. I am happy to say that probably none of these sites would end up on the Daily Sucker, or the top ten Web Pages that Suck, at least not in my layman's opinion. Unlike the Pope's page, the images were kept to a minimum, though backpacker.com was somewhat noisy. None of them seemed to have the dreaded 'Mystery Meat Navigation' either.
However, I did apply rule number one of the Top Ten Mistakes in Web Design (Jakob Nielsen's Alertbox). I was disappointed to find that the Phoenix Mars Lander search box was completely unforgiving. If they really want to make the site kid- and student-friendly, it would serve them well to have a less literal search box, or an advanced search tool. Backpacker.com had the same flaw. Food Network was better: for example, I searched for 'toffu', and it came up with no results but did offer a suggested term. (The suggestion was toffee, but you take what you can get.) I also misspelled caramel as 'carmel' and actually got some hits from that. From applying these other techniques, I can see that the guidelines for user interfaces from different sources support each other.
In closing, I would like to give my support to what our recent speaker Joseph Boudreaux said, "We are hard-wired to use tools in certain ways". The trunk test certainly addresses that hard-wired facet, and I hope that more designers will put this truth into practice. When asked for my opinion, I will certainly keep all the design guidelines in mind.
References:
Krug, Steve. (2006). Don't Make Me Think. Berkeley: New Riders Publishing.
Phoenix Mars Lander. Retrieved October 19th, 2006 from http://phoenix.lpl.arizona.edu/multimedia/
Foodnetwork.com. Retrieved October 20th, 2006 from http://www.foodnetwork.com/food/entertaining
Backpacker.com. Retrieved October 20th, 2006 from http://www.backpacker.com/gear
Nielsen, J. (2004). Top Ten Mistakes in Web Design (Jakob Nielsen's Alertbox). Retrieved October 13th, 2006, from http://www.useit.com/alertbox/9605.html
Flanders, V., Dean Peters. (2002). Web Pages That Suck. Retrieved October 20th, 2006 from http://www.webpagesthatsuck.com/
XML as the building block
In order to better understand how XML extends the initial enterprise of the four building blocks, "an Internet driven by TCP/IP that provides reliable global communication; HTTP, a simple protocol for delivering files; a tag-based language for specifying how data should be displayed; and the browser, a graphical user interface for displaying HTML data" (Coyle), one can compare those blocks with the design goals for XML, as expressed by the W3C:
The ten design goals for XML (somewhat like Moses' Ten Commandments) are:
1. XML shall be straightforwardly usable over the Internet.
2. XML shall support a wide variety of applications.
3. XML shall be compatible with SGML.
4. It shall be easy to write programs which process XML documents.
5. The number of optional features in XML is to be kept to the absolute minimum, ideally zero.
6. XML documents should be human-legible and reasonably clear.
7. The XML design should be prepared quickly.
8. The design of XML shall be formal and concise.
9. XML documents shall be easy to create.
10. Terseness in XML markup is of minimal importance.
So how does XML help provide reliable global communication? According to design goal 1, XML shall be usable over the internet. It does not say the internet here in the United States, but the internet in general. This means that, with an internet connection and a little XML know-how, anyone anywhere should be able to send information via TCP/IP that will be recognized by any other computer with an internet connection. It is a simplified way of sharing data across systems, particularly over the internet.
As for extending HTTP into the future, I believe that XML will play a part in whatever follows it. Since HTTP was originally designed as a way to publish and receive HTML pages, it makes sense that a successor would standardize the sending and receiving of XML documents in the same way. In other words, maybe the future will see an XML-based language for web pages that allows all information to be sent and received reliably every time. Design goals 1, 2, and 4 show how XML would account for these things.
As a tag-based markup language, XML extends the universality of HTML. XHTML, a newer language, combines HTML and XML in a way that allows for automated processing, unlike plain HTML, which was more flexible and therefore less standard. "XHTML consists of all the elements in HTML 4.01 combined with the syntax of XML" (Wikipedia). The problem with HTML is that it "addresses content and structure" (Rhyno 72). For websites that have a large number of pages, XML gives those pages a uniformity of syntax that HTML was not capable of. As I read in several different places, HTML was fine in the beginning, when web pages were few and could be modified individually. Now that this is no longer the case, a more standard language had to be invented. Goals 2, 3, 4, 6, and 9 help make this possible; a short before-and-after sketch follows below.
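As a small illustrative sketch (the file name and text are made up), here is the same fragment written loosely in HTML and then as well-formed XHTML:

    <!-- Loose HTML that most browsers will tolerate -->
    <P>Library hours<BR>
    <IMG SRC=hours.gif>

    <!-- The same content as XHTML: lowercase tags, quoted attribute values,
         and every element explicitly closed -->
    <p>Library hours<br />
    <img src="hours.gif" alt="Branch hours" /></p>

Because the second version is well-formed XML, a program can process it with an ordinary XML parser instead of guessing at the author's intent.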
When researching this particular subject, I was reminded of the cascading style sheets we created for our last exercise. To me, it seems that XML or XHTML and CSS go hand in hand in preparing for the future of the internet. XHTML will be used to create the pages in a uniform, consistent format, and CSS will be used to define the style for those pages in one place. This will in turn make the data more accessible and universal, something that all browsers can read and display correctly every time.
How will XML build upon the browser? Well, as of right now, browsers are built around HTML, which, like XML, descends from SGML. Since XML is compatible with SGML, I believe browsers will come to support XML as well. Especially since the new dominant language appears to be XHTML, browsers will need to be XHTML-compatible, or they won't support very many pages.
XML and libraries: how it builds upon the building blocks
XML is also preparing us for the future in the way data is stored. Rhyno lists three ways in which XML is useful to libraries: it is well formed, it can be validated and checked for consistency, and it separates content from presentation. While I have discussed these points in some way already, it is important to see how libraries, as 'information containers', will utilize XML-based applications for the increasing amount of electronic data they will be storing in the future. I think one of the most important elements for this future is the DTD (document type definition), which XML adopted from SGML. As Rhyno says, "Identifying or authoring the appropriate DTD is one of the most important steps in managing a library's digital collection". A DTD describes the allowed structure of a document, or a portion of one, so that documents claiming to follow it can be validated against it. This makes it easy to search for the correct type of document, since the tags are agreed on in advance. When searching through hundreds or thousands of documents, having a shared tag vocabulary is invaluable. A small sketch of a DTD follows below.
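Here is a minimal, made-up sketch of what this looks like: a tiny finding-aid DTD declared inside the document it governs (all the element names are invented for illustration):

    <?xml version="1.0"?>
    <!DOCTYPE findingaid [
      <!ELEMENT findingaid (title, repository, box+)>
      <!ELEMENT title      (#PCDATA)>
      <!ELEMENT repository (#PCDATA)>
      <!ELEMENT box        (folder*)>
      <!ATTLIST box number CDATA #REQUIRED>
      <!ELEMENT folder     (#PCDATA)>
    ]>
    <findingaid>
      <title>Comic Art Collection</title>
      <repository>Example Library Special Collections</repository>
      <box number="1">
        <folder>Correspondence, 1950-1955</folder>
      </box>
    </findingaid>

A validating parser can reject any document that doesn't match this structure, which is what makes a large collection of finding aids searchable in a predictable way.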
Concluding Thoughts
I appreciate the concise nature of XML, though it can be unforgiving. I believe it is part of the future of the internet and electronic communications because of its universality. Using it in podcasts, my first experience with it, was quite easy. I appreciate how you can just cut and paste the section you need and adapt it for future podcasts; that makes so much more sense than reinventing the wheel, i.e. rewriting the code, each time. Instead, the precise language allows for an ordered way to update the podcasts and keep them in one category. Just as Dreamweaver and Nvu (the open-source editor I have used) make HTML easy, I believe there will be an editor that allows us to create XML and XHTML just as easily. There's probably one out there already that I haven't seen yet!
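To show what I mean by cutting and pasting, here is roughly what one episode's entry (an item element) in a podcast's RSS feed looks like; the title, URL, and numbers are placeholders. To add a new episode, you copy the block and change only a few values:

    <item>
      <!-- Change the title, file URL, file size, and date for each new episode -->
      <title>Unit 5 Podcast: Web Accessibility</title>
      <enclosure url="http://www.example.edu/~student/podcasts/unit5.mp3"
                 length="6221830" type="audio/mpeg" />
      <pubDate>Sun, 24 Sep 2006 18:00:00 MST</pubDate>
    </item>

A subscriber's aggregator checks the feed for new item entries and downloads the enclosures automatically.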
References:
World Wide Web Consortium. (2006/09/11). Retrieved 9/25/2006 from http://www.w3.org/TR/2006/REC-xml-20060816/
Wikipedia. Retrieved 9/25/2006 from http://en.wikipedia.org/wiki/XHTML
Rhyno, Art. (2005) Introduction to XML. In N. Courtney (Ed.) Technology For The Rest of Us. (71-84). Westport: Libraries Unlimited.
W3Schools Online Web Tutorials. Retrieved 9/25/2006 from http://www.w3schools.com/xhtml/xhtml_why.asp
W3Schools Online Web Tutorials. Retrieved 9/25/2006 from http://www.w3schools.com/xml/xml_whatis.asp
The Funnies and Core Web Technologies
I thought I'd share this recent Foxtrot cartoon (if it's too small, right click and view image):
After doing the reading for this section, and attending Wednesday's lecture, the names that Jason spits out are actually familiar! The whole strip last week involved Jason programming a website for his class. The timing was perfect, since I found out I had to create a webpage for our class. Mine is, of course, nowhere near as advanced as his would be, but I had fun learning what goes on behind the scenes of the HTML editors. Creating a CSS file was interesting, too; I can see how it would save time for a website with multiple pages. I hope to apply what I've learned to my ePortfolio. It would be a nice thing to have to show future employers!