Traditionally, computers have used ASCII, a set of 128 characters, a legacy of their English-speaking, American origins. Every character fits in a single byte. Eventually, different countries came up with their own encodings, using the same bytes to represent different characters. For example, in a common "Western" encoding the byte 0xFD means a y with an acute accent, while in a common Turkish encoding the same byte means a dotless i. And this is just for Latin-style encodings, before you even get to the thousands of characters needed by Asian languages.
As computers spread across the world, Unicode was developed: an attempt to assign a unique number to every distinct character in every language. By convention the number is written in hexadecimal with “U+” prepended; this is called a codepoint. So Latin y with acute, “ý”, has the codepoint U+00FD, while Latin dotless i, “ı”, has the codepoint U+0131. UTF-8 (see the utf-8(7) ManPage) is a way of encoding codepoints that is backwards-compatible with legacy systems that use ASCII or Latin characters: the first 256 Unicode codepoints are identical to Western Latin (ISO 8859-1), of which the first 128 are identical to ASCII, and every ASCII character is represented by exactly the same byte in UTF-8. See the UTF-8 FAQ.
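To see the variable-length encoding concretely, you can dump the bytes with od(1) (a quick sketch; assumes your shell and this page are both in UTF-8):

```shell
# "ý" (U+00FD) takes two bytes in UTF-8, while plain ASCII "y" stays one byte:
printf 'ý' | od -An -tx1   # → c3 bd
printf 'y' | od -An -tx1   # → 79
```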
This is a front end to the iconv(3) library (libiconv) that many recent programs use for handling character encoding and conversion.
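For instance, to convert the 0xFD byte from the Western encoding mentioned above into UTF-8 — a sketch using the iconv(1) command-line tool (encoding names can vary slightly between iconv implementations):

```shell
# Latin-1 byte 0xFD ("ý") becomes the two UTF-8 bytes 0xC3 0xBD:
printf '\375' | iconv -f LATIN1 -t UTF-8 | od -An -tx1   # → c3 bd
```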
$ perl -e 'print chr(195) . chr(137) . "\n"'
É
Copy the following text from this page and paste it into your terminal:
echo Árvíztűrő tükörfúrógép
If everything is working, you should see the text both on the shell's input line and in the xterm's output. If it doesn't work, the problem might be with the terminal, with the locale, or with the lack of a fixed font that has those characters.
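A quick way to check the locale side of that (just a sanity check; on a correctly configured UTF-8 system this should report UTF-8):

```shell
# Show which character map the current locale selects:
locale charmap
```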
Some shells (notably zsh(1)) can't cope with it and get confused if you start moving the cursor over the text, although xterm will still print the output fine. Bash, on the other hand, copes with it just fine.
To turn on UTF-8 support in xterm (it must have been compiled with utf-8 support, xterm version 145 or later), you must invoke xterm with the “-u8” option.
To turn on UTF-8 support in gnome-terminal, you print a certain escape sequence to the terminal: “/bin/echo -ne '\033%G'”
You will also need an X11 font that has the unicode characters you want to display. However, if your distribution comes with utf-8 enabled terminals, then it will almost certainly come with a decent default font. Try “xlsfonts | grep iso10646” to see unicode fonts you have access to. You should see some listed for "misc-fixed", which is the default font used by terminals.
You should change that to end with "-iso10646-1" instead, if you have a unicode version of the font installed. If you don't have administrator rights, you can always make your own alias file, e.g. $HOME/.fonts/fonts.alias, and register it with
xset +fp $HOME/.fonts/fonts.alias
from a command line. (It will take effect for new xterms.) Any new xterms should then be able to display more non-ASCII characters.
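For reference, an alias file is just lines of alias name followed by a full XLFD; something along these lines (hypothetical entry — substitute a "-iso10646-1" XLFD that xlsfonts actually lists on your system):

```
fixed  -misc-fixed-medium-r-semicondensed--13-120-75-75-c-60-iso10646-1
```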
Also, you need to re-map the Alt key to be Meta. Add
keysym Alt_L = Meta_L
to your ~/.Xmodmap file (which should be sourced on login), or run
xmodmap -e 'keysym Alt_L = Meta_L'
The program uxterm is a shell script wrapper that sets up the locale properly and then runs xterm with the right parameters.
There is a unicode-enabled version of rxvt, urxvt (rxvt-unicode).
urxvt -fn "xft:Bitstream Vera Sans Mono:pixelsize=16"
(This requires your system to have the correct support for this locale; if it doesn't then the administrator can add "en_NZ.UTF-8 UTF-8" to /etc/locale.gen and run locale-gen(8), which is in the "locales" package.)
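So the administrator's side, on Debian, amounts to one line in a config file (locale name taken from the text above; substitute your own):

```
# /etc/locale.gen — add this line, then run locale-gen(8) as root:
en_NZ.UTF-8 UTF-8
```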
into /etc/environment (create the file if it doesn't already exist; note that this file might be Debian-specific).
As well as getting UTF-8 support, this has the added advantage that locale-aware applications will use the correct currency symbol, number separators, date formatting, etc. for your locale. (E.g., MozillaMail will show dates as dd/mm/yyyy instead of the default US mm/dd/yyyy.)
If you don't have a friendly administrator or can't otherwise get root permissions, you should still be able to generate a locale yourself if it isn't already installed:
mkdir -p ~/pkg/locale/ && localedef -f UTF-8 -i en_NZ ~/pkg/locale/en_NZ.UTF-8
echo 'export LOCPATH=~/pkg/locale' >> ~/.bashrc
echo 'export LC_ALL=en_NZ.UTF-8' >> ~/.bashrc
This is not absolutely necessary -- you can give less the "-r" option to display raw characters instead of octal codes. Or, once you are viewing a file in less, you can type "-" then "r" to toggle this display on and off. If you have the environment variable set, then you can't toggle it. (Sometimes it is useful to see the raw UTF-8 codes, for development purposes.)
perl 5.8 has significantly improved unicode/utf-8 handling over earlier versions. See the perllocale(1) and perlunicode(1) man pages. Once set to use unicode, commands like lc/uc (lower/upper case) and RegularExpression character classes (space/printable/upper/lower etc) will work as you'd expect.
to your script to change the default string encoding.
to your $HOME/.emacs file.
See the VimNotes page.
Mozilla has great charset support, being so new. Netscape >= 4.05 has some support, but does have troubles. Mutt can do UTF-8, but I haven't been able to get it to show the headers summary correctly. I don't know about kmail, balsa, or evolution, but my guess is that they are new enough to have good support.
The easiest thing I've found to do is to get some of the excellent Microsoft true type fonts working under linux as they have put quite a bit of work into internationalisation and fonts. If they aren't installed system wide, you can install them into $HOME/.fonts and programs using fontconfig (most modern graphical programs) will automatically find them. At the very least, "Courier new" and "Times New Roman" are good TTF fonts to use. I personally also like "Verdana" as a sans-serif font.
I copied a bunch of files with "non-printable" UTF-8 characters (Árvíztűrő tükörfúrógép etc) from a Samba share, using cygwin's rsync under Windows, to a vfat drive. Somewhere along the way, the encoding got changed from UTF-8, and the end result was that my ő's changed to bad squiggles or question marks, depending on which program you looked at them with.
The convmv utility lets you do a bulk conversion of character sets in file names:
./convmv -r -f latin1 -t utf8 --notest /array/images/mp3/albums/*
fixed my problem. Thanks to the Unicode/charsets section of the Samba HOWTO.
smbmount //servername/sharename /mnt/point -o codepage=cp850,iocharset=utf8,password=$pass