I agree with Shane. The various sign language transcription schemes are used in much the same way as transcription schemes for English: SW is closer to English words, while HamNoSys and Stokoe are akin to the International Phonetic Alphabet used in dictionary entries. While it can be done, writing an entire or lengthy discourse in IPA (or HamNoSys or Stokoe) would be really tedious; SW is easier. But, like written English, none of our schemes adequately reflects speakers' affect.

On Wed, May 26, 2010 at 10:40 PM, Shane Gilchrist <[log in to unmask]> wrote:

SignWriting was designed for people to write and read.

HamNoSys is now geared towards machine translation - they just write phonological units etc - SW can do that too if needed.

A deaf HamNoSys transcriber was saying to me that it's EASY to write HamNoSys but very difficult to read. In some cases, it's only the writer who can understand what he transcribed (!) (and of course the machine itself).


On 26 May 2010 23:00, Charles Butler <[log in to unmask]> wrote:
HamNoSys, from my understanding, is like Stokoe: it is a linear exposition of sign languages, not based on their actual appearance in space, as SignWriting is.  The only way to change minds and hearts is to show it at TISLR, as we are doing in October, with poster sessions and other methodologies: actual linguistic research using both databases and exposition.
We are dealing with inertia here, and a real culture of denial that a writing system can actually work.  It will take your groundbreaking work, and the work of users like Fernando Capovilla in Brazil, to turn that around, backed by so many piles of literature that it cannot be ignored.
Publish, publish, publish, the overwhelming evidence will change the culture.
Charles Butler Neto
ASL and Libras user.


From: MARIA AZZOPARDI <[log in to unmask]>
To: [log in to unmask]
Sent: Wed, May 26, 2010 4:45:08 PM
Subject: Re: Data exchange with SignPuddle Markup Language

Dear Steve, Val and all the list,
I attended LREC 2010 and I must say I was slightly disappointed at the
very low use of SignWriting among computational sign language linguists. There
were some researchers who told me they had considered SignWriting, but opted
for HamNoSys. It would be ideal if SignWriting were used, I thought, but I
probably can't understand the technicalities, as computers are not my field.
Could you explain why the situation is so?
Thank you very much,

> Hi Bill,
> In SignPuddle Markup Language, there are 3 main parts of information:
> terms, text, and source.  SignWriting can be used in each.  The voice
> language items are defined the same as sign language items.
> However, by convention, I will be using voice language items differently
> than sign language items.
> The voice language items will use UTF-8.  This will be straight
> character data, so I'm wrapping the entries as a CDATA block to avoid
> parsing.
> The sign language items will use BSW as hexadecimal.  I still need to
> decide if terms can be more than one sign.  This will determine if terms
> are edited with SignMaker or SignText.  I need to decide the same for
> the source: one sign only, or more than one sign.
> For the ultimate in flexibility, I could have the sign language items
> use UTF-8; the same as the voice language sections.  I would need to
> encode the Binary SignWriting using the UTF-8 I propose with the plane 4
> solution.  This way, we could mix sign language with HTML markup and
> other spoken languages.  However, this encoding is not approved by the
> Unicode consortium so it may be considered bad manners to start using
> plane 4 without their approval.
> Either way I go, I will not need to update the SPML DTD definition.  You
> can see that I am not limiting the terms, text, or source.
> http://www.signpuddle.net/spml.dtd
> Here's an abbreviated definition
> <!ELEMENT spml (entry+)>
> <!ELEMENT entry (item+)>
> <!ELEMENT item (term*,text?,src?)>
> <!ELEMENT term (#PCDATA)>
> <!ELEMENT text (#PCDATA)>
> <!ELEMENT src (#PCDATA)>
> + one or more
> * zero or more
> ? zero or one
> Regards,
> -Steve
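For anyone curious how Steve's abbreviated DTD plays out in practice, here is a minimal sketch in Python. The entry values below are invented placeholders (the term, the hex string, and the source name are not real puddle data), but the element structure follows the spml/entry/item/term/text/src layout quoted above, with the voice-language term wrapped in CDATA and the sign-language text as a hex string:

```python
import xml.etree.ElementTree as ET

# A minimal SPML document following the abbreviated DTD quoted above.
# The term, hex string, and src values are hypothetical placeholders.
spml = """<spml>
  <entry>
    <item>
      <term><![CDATA[hello]]></term>
      <text>01d801dd</text>
      <src>example puddle</src>
    </item>
  </entry>
</spml>"""

root = ET.fromstring(spml)
item = root.find("entry/item")

# The parser returns the CDATA content as plain character data.
print(item.find("term").text)   # voice-language term: hello
print(item.find("text").text)   # sign-language item as hex: 01d801dd
```

The CDATA wrapper matters for the voice-language items: characters like `<` or `&` inside a term would otherwise have to be escaped, whereas inside CDATA they pass through the parser untouched.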

Regards, Trevor.

<>< Re: deemed!