My Top Ten Internationalization Headaches and How I Fixed Them
by Andrew Kandels
Working on the CATS project has taken me from developing primarily English software into a whole new realm of excitement: internationalization and localization (i18n / L10n). Suddenly I've got people from 120 countries (not a handful of users, but hundreds of paying customers!) wanting to see full support for their native tongues.
I could probably talk for hours about the enormous effort it took to bring CATS to the level of i18n support it has today, but instead I'm going to talk about the top 10 headaches I ran into.
Before I get started: if you're looking at adopting i18n / L10n, either pre-development or on an existing project, UTF-8 is the way to go. There are alternatives, but unless you have a very heavy non-Latin-based user base, stop looking. UTF-8 is backwards compatible with ASCII, it supports just about everything (and is supported by just about everything), and it's the best thing since sliced bread.
1) My umlaut looks like a question mark in a fancy triangle!
This is the first step: change the encoding on all of your rendered HTML pages. Hopefully, you use a CMS or have a single header file where you can add this to the top of your pages in between the <head> tags:
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
The handful of bytes you save by loading ISO-8859-1 isn't worth it. Get on the bandwagon and start implementing UTF-8 even if you don't need it yet; there's a reason the IETF (read: Internet Police) requires that all Internet protocols support it.
Once you cover the HTML, don't forget about other content types. Make sure that your Ajax, RSS feeds and XML responses all include the UTF-8 identifiers, or there will be some jumbling going on.
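For PHP apps, you can also declare the charset in the HTTP headers themselves, which browsers give precedence over the `<meta>` tag. A minimal sketch (the content types shown are just the common cases; adjust to whatever your application actually serves):

```php
<?php
// Declare UTF-8 in the HTTP Content-Type header; browsers give
// this precedence over the <meta> tag if the two ever disagree.
header('Content-Type: text/html; charset=UTF-8');

// Do the same for every other response type you serve:
// header('Content-Type: application/json; charset=UTF-8');    // Ajax
// header('Content-Type: application/rss+xml; charset=UTF-8'); // RSS feeds
```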
2) My XML or HTML doesn't validate; it says "invalid entity" even though it's using <?xml version="1.0" encoding="UTF-8"?> or includes the above <meta> tag, and the entity is valid UTF-8!
Unless your DTD includes specifications for the UTF-8 entities, you're going to get yelled at during validation. The whole point of encoding="UTF-8" is that you don't need entities. Luckily, this is an easy fix in PHP. Use the built-in html_entity_decode() function to turn those entities into their actual characters:
$value = html_entity_decode('fancy &Uuml;', ENT_COMPAT, 'UTF-8');
// returns 'fancy Ü'
Just run your string data through it before handing it to your XML writer. On a side note, if you haven't noticed, my examples of Unicode data almost always include one of my favorite words: umlaut.
4) When I export my data from MySQL using SELECT INTO OUTFILE, it corrupts my UTF-8 in the CSV it creates!
The output file MySQL creates is going to be written in BINARY. It's NOT going to be in your table's character set. For this reason, do not edit the CSV files created by SELECT INTO OUTFILE in a text editor. UTF-8 is variable-length, so a text editor like vim may show 2- or 3-byte combinations that represent a single character. Mess with any of those bytes and you'll corrupt the encoding!
If you need to use LOAD DATA INFILE / SELECT INTO OUTFILE to transfer UTF-8 data, just make sure that:
1) The source and destination tables are using the same UTF-8 encoding
2) You don't mess with the CSV file in a text editor.
3) You include "CHARACTER SET utf8" in the LOAD DATA INFILE statement, right after the "INTO TABLE <name>" part.
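Put together, a round trip looks something like this. The table and file names are hypothetical; the part that matters is the CHARACTER SET clause sitting immediately after INTO TABLE:

```sql
-- Export: the file is written in binary; don't touch it afterwards.
SELECT name, city
FROM people
INTO OUTFILE '/tmp/people.csv'
  FIELDS TERMINATED BY ',' ENCLOSED BY '"'
  LINES TERMINATED BY '\n';

-- Import: tell MySQL the file holds UTF-8 data.
LOAD DATA INFILE '/tmp/people.csv'
INTO TABLE people_copy
  CHARACTER SET utf8
  FIELDS TERMINATED BY ',' ENCLOSED BY '"'
  LINES TERMINATED BY '\n'
(name, city);
```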
5) In SQL, my "table.column_a = table.column_b" is throwing an incompatible character set error. Why does it hate me?
MySQL stores a character set and collation for each database, table and column. If no column-level setting exists, it falls back to the table and then the database. If you're using an incompatible combination of character sets between two tables, string comparison may not be possible without conversion. Either alter your column/table/database to use compatible character sets and collations, or alter your query like so: "CONVERT(table.column_a USING utf8) = CONVERT(table.column_b USING utf8)". I don't suggest this as a long-term solution, as the per-row conversion is slow and prevents MySQL from using indexes on those columns.
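The long-term fix is to bring both tables onto the same character set and collation so the CONVERT() goes away entirely. A sketch, with hypothetical table names:

```sql
-- Rewrite the stored data and the column definitions in one step.
ALTER TABLE table_a CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;
ALTER TABLE table_b CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;
```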
Also, if you're wondering what collation is and how it differs from the character set: the simplest answer is to think of collation as case sensitivity. It goes beyond that, but the most common reason to pick a different collation within UTF-8 is when you don't care about the case of a column (the "username" column of a login table is a great example of case-insensitive, where the "password" column would be case-sensitive). Collations ending in "_ci" are case-insensitive, those ending in "_cs" are their case-sensitive counterparts, and "_bin" collations compare raw byte values.
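The login-table example above might look like this (a sketch; the table and column definitions are made up for illustration):

```sql
-- Case-insensitive usernames, byte-exact password hashes.
CREATE TABLE users (
  username VARCHAR(64) CHARACTER SET utf8 COLLATE utf8_general_ci,
  password VARCHAR(64) CHARACTER SET utf8 COLLATE utf8_bin
);
```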
6) I have a column that allows 32 characters, but my i18n data is getting trimmed to much less (24 characters, say)!
UTF-8 is a variable-length format. In basic English Latin text, the word "four" uses 4 bytes. For fancy characters like umlauts, a single character can take several bytes. When you set a column size in MySQL to 32 (i.e. name varchar(32)), you're setting the bytes, NOT the characters. Therefore, when setting column sizes you should generally multiply your maximum size by 4 (a UTF-8 character takes 1 to 4 bytes), then use PHP to truncate to the character (not byte) limit before sending the query.
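So for a 32-character field, the sizing rule of thumb works out like this (hypothetical table; the arithmetic is the point):

```sql
-- 32 characters of UTF-8 at up to 4 bytes each:
-- size the column at 32 * 4 = 128 bytes, and enforce the
-- 32-*character* limit in PHP before the INSERT.
CREATE TABLE people (
  name VARCHAR(128) CHARACTER SET utf8
);
```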
7) I have the UTF-8 flags set everywhere I should, but my queries still contain scrambled characters!
I'll assume that when you say everywhere, you mean everywhere (the MySQL client library, the HTML page, any Ajax URIs, etc.). This is usually because PHP 5 isn't completely UTF-8 compliant yet (see PHP 6), so several of the string functions still work on bytes and not characters (this is similar to #6).
Be careful not to use functions like substr() or LEFT() to truncate data. Any string operation that works on bytes rather than characters has the potential to chop a multi-byte UTF-8 character midstream and corrupt it. There are functions that start with mb_ (short for multi-byte) which you should use instead. I will note that trim() is safe!
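A quick demonstration of the difference. The "ü" in "Düsseldorf" is two bytes in UTF-8, so a byte-based cut can land in the middle of it:

```php
<?php
// substr() counts bytes and can split a multi-byte character in
// half; mb_substr() counts whole characters.
$word = "Düsseldorf"; // "ü" is 2 bytes in UTF-8

echo substr($word, 0, 2);             // "D" plus half of "ü": corrupt output
echo mb_substr($word, 0, 2, 'UTF-8'); // "Dü": safe
```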
8) I don't speak other languages, and my only form of testing is copying and pasting umlauts. How can my users publish translations?
First, I recommend Gettext. There are numerous Gettext applications out there that let Windows, Mac and Linux users write binary translation files. WordPress has a great implementation; copy it. There's some nice documentation here: http://codex.wordpress.org/Translating_WordPress.
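If you go the Gettext route in PHP, the core of it is only a few calls. A minimal sketch, assuming a hypothetical "myapp" text domain and a ./locale directory holding the compiled .mo files:

```php
<?php
// Pick the locale (here German, as an example).
putenv('LC_ALL=de_DE.UTF-8');
setlocale(LC_ALL, 'de_DE.UTF-8');

// Point Gettext at the compiled translation files:
// ./locale/de_DE/LC_MESSAGES/myapp.mo
bindtextdomain('myapp', './locale');
textdomain('myapp');

// _() looks the string up in the .mo file and falls back to
// the original English if no translation exists.
echo _('Welcome');
```

Translators never touch your code; they edit .po files in a desktop tool and hand you the compiled .mo.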
Another suggestion: whenever you test your forms for UTF-8 compatibility, use Chinese text. If it works with Mandarin, it works with everything. Here's where you can get some sample text: http://www.lorem-ipsum.info/generator3.
9) What about text inside images?

Images: no. Use CSS to position text over the images and stop putting text in the images themselves, or accept the headache of maintaining multiple images for each language.
10) What is the biggest headache you wish you had avoided when implementing i18n in CATS?
I allowed clients to translate individual strings themselves and change any piece of text on any page they wanted. That means maintaining thousands of copies of translation files, which are difficult to cache, and reading from them when rendering nearly every page. If you give a client a cookie, they'll want a glass of milk. Draw a line and support only a handful of language translations. You'll thank me later.