Thursday, September 23, 2010

Google technologist derides Oracle's lack of developer focus

When it bowed out of the JavaOne conference this year, Google cited Oracle's lawsuit over Java use in Google Android. But one Google technologist suggests a second possible reason for Google's reticence: Oracle's lack of focus on developers.
In a blog entry posted Monday, Tim Bray, a Google developer advocate widely known as one of the inventors of XML, recounted a conversation he had with someone "familiar" with how Oracle runs its OpenWorld conference, alongside which JavaOne will be held this year. Bray asked why the company didn't focus more on developers at the event. The individual responded that, for Oracle, building rapport with developers was not a chief priority.
"The central relationship between Oracle and its customers is a business relationship, between an Oracle business expert and a customer business leader ... The concerns of developers are just not material at the level of that conversation; in fact, they're apt to be dangerous distractions," Bray quoted the unnamed individual.
Although it is a short post, it does provide a glimpse into how priorities differed between Oracle and Sun Microsystems, which Oracle purchased in January.
Oracle executives have expressed enthusiasm for supporting the development communities around some widely used Sun technologies such as MySQL and Java. But other, less successful or harder-to-commercialize projects -- such as OpenSolaris, OpenOffice, and OpenSSO -- have seemingly been neglected or even abandoned by the company.
A number of reader-contributed comments on the post noted that Oracle's focus on the business side of technology may not necessarily be counted as a negative for the company, especially when compared to the developer-focused ways of the less successful Sun.
"How is this a bad thing? It's all about building the best applications for your customers," one poster noted. "Imagine if airlines treated their relationship with the flier as the most important. Imagine if politicians treated their relationship with constituents as most important."
Bray was a Sun Microsystems chief technologist who resigned from Oracle shortly after its purchase of Sun. He posted the comment on his personal blog, where he stresses that the opinions he expresses are not Google's.
Oracle did not immediately respond to a request for comment.

Oracle silent on Java independence initiative

While Java founder James Gosling has campaigned for Oracle to place Java under the jurisdiction of an independent foundation, Oracle is declining to comment at all on the notion.
Asked about Gosling's efforts during a press question-and-answer session at the Oracle OpenWorld conference Tuesday in San Francisco, Oracle's Thomas Kurian, executive vice president of product development, simply declined to comment.
"I will not talk about that," Kurian said.
Gosling has sought to hold Oracle's feet to the fire on an effort the company supported in 2007 to have the Java Community Process become an independent, vendor-neutral standards organization. That was before Oracle bought Sun Microsystems, which had jurisdiction over Java at the time.  Oracle completed its Sun acquisition in January.
Kurian did, however, clarify Oracle's position on the fate of JavaFX Mobile, the mobile-device variant of the JavaFX rich Internet application platform founded by Sun. An Oracle official had described JavaFX Mobile as being on hold on Monday; Kurian said JavaFX Mobile will not run on the CLDC (Connected Limited Device Configuration) lightweight Java Virtual Machine but will run on other virtual machines.
Kurian touted Java's capabilities and ambitions for mobile devices, stressing that 31 times more Java-enabled mobile phones ship every year than Apple iPhone and Google Android devices combined.
"I would not underestimate our capability [of] delivering a new Java platform" in this space, Kurian said.
Kurian also pledged continued support of the NetBeans open source IDE Oracle inherited from Sun.
Also, John Fowler, Oracle executive vice president of systems, said the final version of the Solaris 11 Unix OS is due next year. Oracle's Cloud Office collaborative application suite, meanwhile, is nearing a milestone. The suite targets the Web and mobile devices.
"We're right on the edge of having a preview for it," said Edward Screven, Oracle chief corporate architect.
Source: InfoWorld.com

Wednesday, September 22, 2010

Hibernate 3.6.0.Beta4 release

Hibernate 3.6.0.Beta4 has been released, incorporating mostly minor bug fixes and improvements. Most of the work this cycle went into the improved documentation. For those not aware, we are planning to split the documentation into two books:
  1. Getting Started Guide, see HHH-5441 : this is a collection of tutorials and information on the Hibernate community, etc.
  2. Developer Guide, see HHH-5466 : this is essentially the information from the existing manual, but presented in a more topical fashion.
The Getting Started Guide is mostly done. There is a single subtask outstanding to incorporate a tutorial on basic Envers usage, but it already contains tutorials on basic Hibernate usage (with both hbm.xml mappings and annotations) as well as a basic JPA usage tutorial. They all build on the same schema and domain classes, in the hope that this will be useful in illustrating how to move from one paradigm to another. In fact, they all perform exactly the same steps for illustration (except for the Envers tutorial when it gets done, since it needs to present a very different use case to usefully show Envers usage).
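To give a flavor of the shared steps, here is a minimal JPA-style sketch along the lines of what the tutorials walk through (the Event entity and persistence-unit name here are illustrative, not lifted from the guide itself):

    import java.util.Date;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.Persistence;
    import javax.persistence.Temporal;
    import javax.persistence.TemporalType;

    @Entity
    class Event {                      // would normally live in its own file
        @Id @GeneratedValue
        private Long id;
        private String title;
        @Temporal(TemporalType.TIMESTAMP)
        private Date date;

        Event() {}                     // JPA requires a no-arg constructor
        Event(String title, Date date) { this.title = title; this.date = date; }
    }

    public class JpaTutorialSketch {
        public static void main(String[] args) {
            // The persistence-unit name must match META-INF/persistence.xml.
            EntityManagerFactory emf = Persistence.createEntityManagerFactory("tutorial");
            EntityManager em = emf.createEntityManager();
            em.getTransaction().begin();
            em.persist(new Event("Our very first event!", new Date()));
            em.getTransaction().commit();
            em.close();
            emf.close();
        }
    }

Per the post, the hbm.xml and annotation tutorials perform these same steps through a Hibernate Session instead of a JPA EntityManager.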
We are also trying out bundling the tutorials up as a working project this time (a Maven multi-module project) to make it even easier to get up and running with them. We are still working through the details of hosting that, in terms of referencing the zip from the tutorials (that's the problem with modularizing stuff). Anyway, in the interim I thought this one was close enough that I went ahead and made it available from http://dl.dropbox.com/u/3636512/getting-started-guide/index.html. Some notes:
  • This URL is only available temporarily.
  • The documentation references a link to obtain the code. That link is not accurate. We are still deciding where these will live and how they will be referenced. In the meantime I have zipped up the code and made it available here: http://dl.dropbox.com/u/3636512/getting-started-guide/tutorials.tar.gz (again temporarily).
Please report any issues to JIRA. Visit us on IRC or the forums if you have usage questions.

source: hibernate.org

Monday, April 5, 2010

Quick Tip: An Introduction to jQuery Templating

JavaScript templating is a neat idea: it allows you to easily convert JSON to HTML without having to parse it. At Microsoft's MIX10 conference, the company announced that it is starting to contribute to the jQuery team. One of its efforts is a templating plugin. In this quick tip, I'll show you how to use it!
First, you'll need some data to template. You'd likely retrieve JSON from your server, but object and array literals work just as well, so that's what we'll use:
var data = [
  { name : "John",  age : 25 },
  { name : "Jane",  age : 49 },
  { name : "Jim",   age : 31 },
  { name : "Julie", age : 39 },
  { name : "Joe",   age : 19 },
  { name : "Jack",  age : 48 }
 ];
The template is written inside a script tag (given a non-executable type such as text/html so the browser ignores its contents). For each item in your JSON, the template renders the HTML, and the plugin returns the entire HTML fragment to you. We can get at the JavaScript values from within the template by using {% and %} as delimiters, and we can also execute regular JavaScript within these tags. Here's our template, in a script tag with the id listTemplate:
<li>
  <span>{%= $i + 1 %}</span>
  <p><strong>Name:</strong> {%= name %}</p>
  {% if ($context.options.showAge) { %}
    <p><strong>Age:</strong> {%= age %}</p>
  {% } %}
</li>
To render the data with the template, call the plugin, passing in the data; you can optionally pass an options object as well. (These aren't predefined options; they're values you want to use within the template, perhaps for branching.)
    $("#listTemplate").render(data, { showAge : true }).appendTo("ul");
    
    Source: http://net.tutsplus.com

    What Is the Top Mobile Platform for Open Source Developers?

    Mobile platforms like Apple's iPhone and Google's Android have become a key focus for open source developers. And the trend is only increasing, though new research has found that over the course of the last year, there has been a shift in which mobile platform has the most open source development activity.
    A new study by Black Duck Software found that at the end of 2009, there were 224 new open source software projects on Google's Android operating system, bringing its total to 357 open source projects in all. That's enough to leapfrog Apple's (NASDAQ: AAPL) iPhone to take the top spot in the number of open source projects being developed on either platform.
    Android's 2009 open source software tally represents a 168 percent gain over the number of projects reported for it in 2008.
    In contrast, Apple's iPhone garnered 76 new open source software projects in 2009, representing 43 percent growth over 2008. In total, Black Duck reported that at the end of 2009 there were 252 open source software projects for Apple's iPhone.

    Overall, 2009 was a year of expansion for open source development across the whole of the mobile realm.
    "We're seeing robust growth in open source projects for mobile platforms, with 39 percent growth in the number of projects that are available [and] over 3,200 available now," Peter Vescuso, executive vice president of marketing and business development at Black Duck Software, told InternetNews.com. "The dynamics within that are pretty interesting with Android representing really the bulk of the growth with 224 projects, which is about 25 percent of all new projects."
    Coming in third place behind Android and iPhone is Windows Mobile, with 75 new projects in 2009, raising its total to 248 open source software projects in all. According to a recent study, Windows Mobile has been losing share in terms of adoption over the course of the last year.
    Black Duck's data comes from its knowledge base, which is produced by a team of people referred to as "spiders" who scour every known open source repository to collect data. The Black Duck knowledge base is also at the heart of Black Duck's business, which aims to provide users with information on code licensing and related issues.
    In terms of methodology, Black Duck's report isn't limited to looking at only full-fledged, downloadable applications that reside in an app store.
    "Not all of these projects are applications. Some are, but there are also libraries, widgets and frameworks," Vescuso said. "None of these projects come from app stores -- they come from the project repositories. We're going right to the source. We're not going to the stores where these projects might be available for download."
    Vescuso noted that Android likely gained in popularity in 2009 thanks in part to its open source nature and the fact it has the support of a wide range of handset vendors and operators.
    Still, Vescuso added that open source and the iPhone also have an important relationship.
    "Even though you think of the iPhone as a closed, proprietary platform, it is significantly built on open source," Vescuso said. "It heavily leverages open source and so it benefits from open source. In an odd way ... as the iPhone succeeds, to a large extent, open source is succeeding -- even though iPhone is not an open platform."

    Source: http://www.developer.com

    10 Cool Firefox Add-Ons

    This is a must-read for Firefox fans! We'll review 10 cool add-ons that will make your cross-platform Mozilla web browser even better -- add-ons that help fix annoyances, save time, unlock advanced functionality, and keep you connected. Let's get started!

    #1 NewTabURL to control your new tab page

    One of my pet peeves about tabbed web browsers is the blank tab page. More specifically, it annoys me when I click to open another tab and I get a "New Tab" page, or completely blank page. I'm a Google fan; I want Google to load. I just don't understand why Microsoft and Mozilla won't make the default setting load your homepage for new tabs!
    If you're the same way, you may have already checked (and double-checked) the tab settings in Firefox and found no homepage setting for new tabs. However, there are add-ons that will help; for instance, NewTabURL. This add-on lets you choose the URL for new tabs: blank page, home page, current page, or specific URL.
    NewTabURL also gives you another feature that automatically loads URLs from the clipboard. For example, you can copy a website address from a document, browser, or anywhere and when you open another tab in Firefox, the copied URL will automatically load in the new tab.

    #2 iMacros for Firefox for automating browser tasks and tests

    This is a very interesting add-on, giving you the ability to record and play macros in Firefox. Pretty much anything repetitive you do in Firefox you can automate with iMacros. You can teach it to fill out forms or download and upload files. It can import or export data to and from CSV or XML files or databases. It even includes support for working with PDF files, capturing screenshots, user agent simulation, and proxies.
    iMacros for Firefox also includes a password manager. These passwords can be used within macros, and they can be secured with 256-bit AES encryption.

    #3 Web Developer to design, test, and troubleshoot sites and applications

    This add-on is great for anyone that designs or maintains websites or web applications. It gives you a new Firefox menu and toolbar with various web developer tools. Use it to test, inspect, or troubleshoot cookies, forms, images, and many other web components.
    It gives you control over client-side settings by letting you easily toggle Java, JavaScript, the cache, cookies, the pop-up blocker, and other features on and off. You can view CSS details and even edit style sheets to see live results. It includes many inspection and manipulation features for forms and images. It also features code validators and many other miscellaneous tools.

    #4 Yoono for keeping tabs on your social networking and IM friends

    Yoono is a must-have add-on for anyone that communicates via social networks and/or instant messaging services. It can serve as a single spot to check your social networking feeds and update your status for all networks at once. If you use multiple sites or services, this add-on can save you a lot of time.

    #5 Gmail Manager for quick and easy Gmail access

    If Google's Gmail is your email provider of choice, you ought to check out the Gmail Manager add-on. It gives you an icon in the status bar of Firefox, loaded with shortcuts to create and check messages, among other tasks. You'll be notified of incoming messages. It even detects email links and can bring up Gmail when you click mailto links. Best of all, Gmail Manager supports multiple accounts.
    Source: http://www.linuxplanet.com

    The New Open Source Business Model Still Relies on Closed Source

    Over the last couple of years a number of different open source business strategies have evolved. According to the 451 Group, it's an evolution that includes the broader adoption and usage of open source overall by both open source and proprietary software vendors.
    Back in 2008, the 451 Group put out a landmark report on open source business strategies. According to 451 Group analyst Matt Aslett, there has been some change since then. Among the changes is a decline in the dual-licensing strategy that was once a popular business strategy for vendors aiming to profit from their open source technologies.
    "I didn't expect it to be significant, but when I looked at the vendors we analyzed in 2008, 16 percent were using a dual-licensing strategy," Aslett told InternetNews.com. "In 2010, it's just 5 percent of the same 114 vendors -- that really bears out the fact that there has been a shift away from dual licensing."
    Dual licensing is an approach whereby a vendor provides its software under both an open source and a commercial license. It's an approach that was popularized by open source database vendor MySQL. As vendors have moved away from dual licensing as a business strategy, other models and approaches have emerged to take its place.

    The Open Core model

    "We've seen a few of the dual-licensed vendors that have dropped the commercial version and have gone the pure open source approach and just relying on support and services revenue," Aslett said. "We've seen a lot more move to the Open Core model. We saw that grow from 24 percent of vendors in 2008 to 30 percent today."
    Aslett defines the Open Core model as one where there is a core open source project for which the vendor supplies proprietary extensions. While dual licensing differs from open core, there is at least one key similarity.
    "At the end of the day both open core and dual-licensing involve commercially licensed proprietary software," Aslett said. "Both strategies enable a vendor to have some control over the commercial aspects of the business strategy and the enterprise version."
    Overall, Aslett noted that much has changed in open source usage since his 2008 report, such that it is now more difficult to isolate and identify all of the vendors that have an open source business strategy. More vendors than ever are using open source at different points in their processes and applications, and usage is not limited to pure-play open source vendors.
    "If you look now and see how a company like IBM, Oracle, SAP or Microsoft is making money from open source, it's not in the way that we've traditionally seen open source specialists make money," Aslett said. "It's through complementary products and services. So those are very different strategies that don't necessarily focus on commercializing the open source software directly."
    For those vendors that are open source specialists, Aslett sees the big challenge as figuring out how to convert community users into paying users. He noted that it's a balancing act for many vendors as not every user wants or needs to be sold on additional services.
    "I think that a lot of vendors have gotten better at realizing to not try and convert all community users as it actually could have a detrimental effect on the image of the company," Aslett said. "So they've been a lot more clever about the techniques they use to make sure they capture users when they're at the point that they want to engage in a subscription or get a commercially licensed extension."
    Source: http://www.linuxplanet.com

    MySQL Prepared Statements to Generate Crosstab SQL

    MySQL reporting requirements sometimes involve both unknown column and row values, necessitating a more powerful means of generating crosstabs. Today's article presents prepared statements, which let us dynamically generate the SQL and assign it to a variable so that we can tailor the output to the number of data values.
    During the past several weeks, we've been learning how to create crosstabs in MySQL. We've now covered the basics of fetching the row and column data, as well as how to overcome some of the challenges inherent in outputting data as columns. As we saw, organizing data into columns can be an arduous task due to the SQL language's natural tendency to append data to rows. We can transpose row data to columns easily enough, but the number of possible data values in the horizontal axis needs to be known beforehand.
    Unfortunately, there will be times that your reporting requirements will require both unknown column and row values, or have a tendency to change often enough to invalidate previous code. In such instances, you need a more powerful means of generating crosstabs. Today's article presents just such a mechanism: Prepared Statements. By dynamically generating the SQL and assigning it to a variable, we can tailor the output based on the number of data values, thus unburdening us from having to anticipate changes.


    How Static Is the Data Really?

    Way back in the Tips for Simplifying Crosstab Query Statements article, we were introduced to a crosstab report that displayed the number of cases by Region broken down by month, and later, by year as well. No sooner was the article published than someone asked, "What happens when a new Region is introduced?" The answer is simple: an extra column must be added to the SELECT field list. This is straightforward enough to do and can be expected to occur very rarely, as Regions are what you would call a static data set. Besides Regions, other geographic entities, including continents, countries, counties, provinces, states, and cities, can also be considered static.
    Having said that, even fixed data sets such as time frames can vary enormously. I'm not referring to the elasticity of space-time, as discovered by Einstein, but rather to how the start and end points of a SELECT query depend on reporting needs. With regard to our own crosstab query, going from a single year to multiple ones necessitated many changes to the SQL.
    At the other end of the spectrum is variable data, which can change drastically from one report to the other. Imagine reporting on Starbucks coffee houses in the nineties boom period! Since you could expect the number of shops to increase on an almost daily basis, you’d definitely need a more flexible approach!

    Steps in Converting the Query into a Prepared Statement

    Going from an SQL statement to a Prepared Statement will be done in two steps:
    • First, we'll rewrite the query to generate the Prepared Statement whose output will vary according to the number of columns.
    • Second, we'll insert the SQL generating Prepared Statement into a stored proc, so that we can create the Prepared Statement and execute it in one fell swoop.

    Dynamically Generating the SQL Statement

    In order to dynamically generate an SQL string, we’ll be using the CONCAT() and GROUP_CONCAT() string functions.
    The CONCAT() function accepts a variable number of string parameters, and returns another string, which is comprised of all the input parameters joined together in the order that they were passed in. The following code would concatenate a name in last name (comma) first name format:
    SELECT CONCAT(last_name, ", ", first_name) AS NAME
    FROM   CLIENTS;
    Produces:
    NAME
    _____________
    Jones, Paul
    McDonald, Jim
    Miller, Bruce
    Portman, Bess
    The GROUP_CONCAT() function returns a string result with the concatenated non-NULL values from a group. Here, it's used to aggregate all the rows from the TA_CASES table and return the collection of SELECT list expressions that makes up the horizontal axis of the crosstab. The following query returns a string value that replaces the SQL statement of our previous crosstab query:
    SELECT concat(
        "SELECT CASE WHEN Month_Num IS NULL", "\n", 
        "            THEN 'TOTAL'", "\n", 
        "            ELSE Month", "\n", 
        "       END        AS 'Month',", "\n",
        group_concat( DISTINCT concat("       REGION_", REGION_CODE, 
                                      "  AS 'REGION ", REGION_CODE, "',", "\n"
                               )
                      order by REGION_CODE
                      separator '' 
                    ),
        "       TOTAL", "\n",
        "FROM  (     SELECT   MONTH(CREATION_DATE)\t\t\t\t\t\t\t\tAS Month_Num,", "\n",
        "\t\tMONTHNAME(CREATION_DATE)\t\t\t\t\t\t\t\t\tAS 'Month',", "\n",  
        group_concat( 
            DISTINCT concat("\t\t\t\tCOUNT(CASE WHEN REGION_CODE ='", REGION_CODE, 
                            "' THEN FEE_NUMBER ELSE NULL END) AS 'REGION_", 
                            REGION_CODE, "',", "\n"
                     )
            order by REGION_CODE
            separator '' 
        ),
        "            COUNT(*)\t\t\t\t\t\t\t\t\t\t\t\t\tAS 'TOTAL'", "\n",
        "            FROM  TA_CASES", "\n",
        "            WHERE YEAR(CREATION_DATE)=", YEAR(CREATION_DATE), "\n",
        "            GROUP BY Month_Num WITH ROLLUP) AS CA;"
    ) statement
    FROM TA_CASES
    WHERE YEAR(CREATION_DATE)=1998;
    Here is the resulting SQL code as created by our dynamic SQL generator:
    SELECT CASE WHEN Month_Num IS NULL
                THEN 'TOTAL'
                ELSE Month
           END        AS 'Month',
           REGION_01  AS 'REGION 01',
           REGION_02  AS 'REGION 02',
           REGION_03  AS 'REGION 03',
           REGION_04  AS 'REGION 04',
           REGION_05  AS 'REGION 05',
           TOTAL
    FROM  (SELECT MONTH(CREATION_DATE)      AS Month_Num,
                  MONTHNAME(CREATION_DATE)  AS 'Month',
                  COUNT(CASE WHEN REGION_CODE ='01' THEN FEE_NUMBER ELSE NULL END) AS 'REGION_01',
                  COUNT(CASE WHEN REGION_CODE ='02' THEN FEE_NUMBER ELSE NULL END) AS 'REGION_02',
                  COUNT(CASE WHEN REGION_CODE ='03' THEN FEE_NUMBER ELSE NULL END) AS 'REGION_03',
                  COUNT(CASE WHEN REGION_CODE ='04' THEN FEE_NUMBER ELSE NULL END) AS 'REGION_04',
                  COUNT(CASE WHEN REGION_CODE ='05' THEN FEE_NUMBER ELSE NULL END) AS 'REGION_05',
                  COUNT(*)                  AS 'TOTAL'
           FROM  TA_CASES
           WHERE YEAR(CREATION_DATE)=1998
           GROUP BY Month_Num WITH ROLLUP) AS CA;

    Moving the Prepared Statement into a Stored Procedure

    Placing our code in a stored proc will make running the query a lot easier as it can generate the statement and then execute it to retrieve the results. Here is the code for the stored proc:
    CREATE PROCEDURE `p_case_counts_per_region_by_month`() 
    LANGUAGE SQL 
    NOT DETERMINISTIC 
    CONTAINS SQL 
    SQL SECURITY DEFINER 
    BEGIN  
      SELECT concat(
        "SELECT CASE WHEN Month_Num IS NULL", "\n", 
        "            THEN 'TOTAL'", "\n", 
        "            ELSE Month", "\n", 
        "       END        AS 'Month',", "\n",
        group_concat( DISTINCT concat("       REGION_", REGION_CODE, 
                                      "  AS 'REGION ", REGION_CODE, "',", "\n"
                               )
                      order by REGION_CODE
                      separator '' 
                    ),
        "       TOTAL", "\n",
        "FROM  (     SELECT   MONTH(CREATION_DATE)\t\t\t\t\t\t\t\tAS Month_Num,", "\n",
        "\t\tMONTHNAME(CREATION_DATE)\t\t\t\t\t\t\t\t\tAS 'Month',", "\n",  
        group_concat( 
            DISTINCT concat("\t\t\t\tCOUNT(CASE WHEN REGION_CODE ='", REGION_CODE, 
                            "' THEN FEE_NUMBER ELSE NULL END) AS 'REGION_", 
                            REGION_CODE, "',", "\n"
                     )
            order by REGION_CODE
            separator '' 
        ),
        "            COUNT(*)\t\t\t\t\t\t\t\t\t\t\t\t\tAS 'TOTAL'", "\n",
        "            FROM  TA_CASES", "\n",
        "            WHERE YEAR(CREATION_DATE)=", YEAR(CREATION_DATE), "\n",
        "            GROUP BY Month_Num WITH ROLLUP) AS CA;"
      ) statement
      into @case_counts_per_region_by_month
      FROM TA_CASES
      WHERE YEAR(CREATION_DATE)=1998;
    
      prepare case_counts_per_region_by_month   
      from @case_counts_per_region_by_month;    
      execute case_counts_per_region_by_month;   
      deallocate prepare case_counts_per_region_by_month; 
    END
    Inside the procedure, we generate the SQL for the query as we did above, but within a proc we can save it to a variable using the SELECT INTO syntax. A Prepared Statement is then utilized to execute the generated code.
    A SELECT INTO can only be used where the SQL returns exactly one row; yet another reason that generating the SQL statement as a string works so well!
    A Prepared Statement is a combination of three separate SQL statements:
    • PREPARE prepares a statement for execution.
    • EXECUTE executes a prepared statement.
    • DEALLOCATE PREPARE releases a prepared statement.
    Once the proc has been created, all we need to do is call it by entering the following command line:
    mysql> call p_case_counts_per_region_by_month;
    Here is the record set that is returned by our proc:
    Month       REGION 01  REGION 02  REGION 03  REGION 04  REGION 05  TOTAL
    April              13         33         76          2         47    171
    May                17         55        209          1        143    425
    June                8         63        221          1        127    420
    July               13        104        240          6        123    486
    August             18        121        274          9        111    533
    September          25        160        239          2         88    514
    October             9         88        295          2        127    521
    November            2         86        292          2        120    502
    December            1        128        232          6        155    522
    TOTAL             106        838       2078         31       1041   4094
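    If you would rather invoke the proc from application code than from the mysql client, here is a minimal JDBC sketch (the connection URL, database name, and credentials are placeholders for your own environment):

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.ResultSetMetaData;
        import java.sql.SQLException;

        public class CrosstabReport {
            public static void main(String[] args) throws SQLException {
                String url = "jdbc:mysql://localhost:3306/reporting"; // placeholder URL
                try (Connection conn = DriverManager.getConnection(url, "user", "password");
                     CallableStatement stmt = conn.prepareCall("{call p_case_counts_per_region_by_month()}");
                     ResultSet rs = stmt.executeQuery()) {
                    ResultSetMetaData meta = rs.getMetaData();
                    int cols = meta.getColumnCount();
                    // Header row comes from the generated column labels (REGION_01, ...).
                    for (int i = 1; i <= cols; i++) {
                        System.out.printf("%-12s", meta.getColumnLabel(i));
                    }
                    System.out.println();
                    while (rs.next()) {                  // one row per month, plus the rollup row
                        for (int i = 1; i <= cols; i++) {
                            System.out.printf("%-12s", rs.getString(i));
                        }
                        System.out.println();
                    }
                }
            }
        }

    Because the proc builds the column list at runtime, the Java side reads the column labels from the ResultSetMetaData rather than hard-coding them.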
    In the last installment of the Crosstab series, we'll make the proc more generic by moving the column and table names that we're reporting on into input parameters.
    Source: http://www.databasejournal.com

    Tuesday, March 30, 2010

    Apache Lucene 2.9.2 and 3.0.1 Released

    Here’s the announcement:
    Hello Lucene users,
    On behalf of the Lucene development community I would like to announce the release of Lucene Java versions 3.0.1 and 2.9.2:
    Both releases fix bugs in the previous versions:
    - 2.9.2 is a bugfix release for the Lucene Java 2.x series, based on Java 1.4
    - 3.0.1 has the same bug fix level but is for the Lucene Java 3.x series, based on Java 5.
    New users of Lucene are advised to use version 3.0.1 for new developments, because it has a clean, type-safe API.
    Important improvements in these releases include:
    - An increased maximum number of unique terms in each index segment.
    - Fixed experimental CustomScoreQuery to respect per-segment search. This introduced an API change!
    - Important fixes to IndexWriter: a commit() thread-safety issue, lost document deletes in near real-time indexing.
    - Bugfixes for Contrib’s Analyzers package.
    - Restoration of some public methods that were lost during deprecation removal.
    - The new Attribute-based TokenStream API now works correctly with different class loaders.
    Both releases are fully compatible with the corresponding previous versions. We strongly recommend upgrading to 2.9.2 if you are using 2.9.1 or 2.9.0; and to 3.0.1 if you are using 3.0.0.

    Lucene and Solr Development Have Merged

    The Lucene community has recently decided to merge the development of two of its sub-projects – Lucene->Java and Lucene->Solr. Both code bases now sit under the same trunk in svn and Solr actually runs straight off the latest Lucene code at all times. This is just a merge of development though. Release artifacts will remain separate: Lucene will remain a core search engine Java library and Solr will remain a search server built on top of Lucene. From a user perspective, things will be much the same as they were – just better.
    So what is with the merge?

    Because of the way things worked in the past, even with many overlapping committers, many features that could benefit Lucene were placed in Solr. They arguably "belonged" in Lucene, but due to dev issues it benefited Solr to keep certain features contributed by Solr devs under Solr's control. Moving some of this code to Lucene would mean that some Solr committers would no longer have access to it; a Solr committer who wrote and committed the code might actually lose the ability to maintain it without the assistance of a Lucene committer. And if Solr wanted to be sure to run off a stable, released version of Lucene, Solr's release could be tied to Lucene's latest release whenever some of this code needed to be updated. With Solr planning to update its Lucene libs less frequently (due to the complexities of releasing with a development version of Lucene), there would be long waits for bug fixes to become available in Solr trunk.
    All in all, there would be both pluses and minuses to refactoring Solr code into Lucene without the merge, but the majority felt the minuses outweighed the pluses. Attempts at doing this type of thing in the past have failed and resulted in similar code diverging in both code bases. With many committers overlapping both projects, this was a very odd situation: fix a bug in one place, then go and look for the same bug in similar, but different, code in another place -- perhaps only being able to commit in one of the two spots.

    With merged dev, there is now a single set of committers across both projects. Everyone in both communities can now drive releases – so when Solr releases, Lucene will also release – easing concerns about releasing Solr on a development version of Lucene. So now, Solr will always be on the latest trunk version of Lucene and code can be easily shared between projects – Lucene will likely benefit from Analyzers and QueryParsers that were only available to Solr users in the past. Lucene will also benefit from greater test coverage, as now you can make a single change in Lucene and run tests for both projects – getting immediate feedback on the change by testing an application that extensively uses the Lucene libraries. Both projects will also gain from a wider development community, as this change will foster more cross pollination between Lucene and Solr devs (now just Lucene/Solr devs).

    All in all, I think this merge is going to be a big boon for both projects. A tremendous amount of work has already been done to get Solr working with the latest Lucene APIs and to allow for a seamless development experience with Lucene/Solr as a single code base (the Lucene/Solr tests are ridiculously faster than they were as well!). Look for some really fantastic releases from Lucene/Solr in the future.

    Google App Engine: What Is It Good For?

    As a developer, I'm enthusiastic about cloud computing platforms because they let me spend more time writing web applications and services and less time dealing with scalability and deployment issues. In particular, Google App Engine offers automatic scaling and potential cost savings if you design the applications to run on it with the proper discipline.
    In this article, I provide an overview of the Google App Engine platform for developers. Along the way, I offer some tips for writing scalable and efficient Google App Engine applications.

    Google App Engine Overview

    I use Google App Engine for several of my own projects but I have not yet used it on any customer projects. Google engineers use Google App Engine to develop and deploy both internal and public web applications. As you will see, designing applications to run on Google App Engine takes some discipline.

    The Datastore and App Efficiency and Scalability


    The non-relational datastore for Google App Engine is based on Google's Bigtable system for storing and retrieving structured data. Bigtable can store petabyte-sized data collections, and Google uses Bigtable internally for web indexing and as data storage for user-facing applications like Google Docs, Google Finance, etc. Bigtable is built on top of the distributed Google File System (GFS). As a developer using Google App Engine, you can also create very large datastores.
    The datastore uses a structured data model, and the unit of storage for this model is called an entity. The datastore is hierarchical, which provides a way to cluster data or to manage "contains"-type relationships. The way this works is fairly simple: each entity has a (primary) key and an entity group. For a top-level entity, the entity group is simply the (primary) key. For example, if I have a kind of entity (think of this as a type or a class) called Magazine, I might have an entity representing an issue of this magazine identified with a key value of /Magazine:programingillustrated0101, and the entity group value would be the same as the key. I might have another entity of kind Article with an entity group of /Magazine:programingillustrated0101 and a key of /Magazine:programingillustrated0101/Article:10234518. Thus, you know that this article belongs to this issue of the magazine.
    Entity groups also define those entities that can be updated atomically in a transaction. There is no schema for entities; you might have two entities of kind Article that have different properties. As an example, a second article might have an additional property relatedarticle that the first article does not have. The datastore also naturally supports multiple values of any property.
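    Here is a minimal sketch of the example above using App Engine's low-level Java datastore API (the kinds, key names, and property values mirror the Magazine/Article illustration and are mine, not Google's):

        import com.google.appengine.api.datastore.DatastoreService;
        import com.google.appengine.api.datastore.DatastoreServiceFactory;
        import com.google.appengine.api.datastore.Entity;

        public class MagazineSample {
            public static void saveIssueAndArticle() {
                DatastoreService ds = DatastoreServiceFactory.getDatastoreService();

                // Top-level entity: its entity group is simply its own key.
                Entity issue = new Entity("Magazine", "programingillustrated0101");
                ds.put(issue);

                // Child entity: the parent key places it in the issue's entity group,
                // so the two can be updated atomically in a single transaction.
                Entity article = new Entity("Article", "10234518", issue.getKey());
                article.setProperty("relatedarticle", "10234519"); // schema-less extra property
                ds.put(article);
            }
        }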
    The primary technique for making your Google App Engine applications efficient and scalable is to rely on the datastore—rather than your application code—to sort and filter data. The next most important technique is effectively caching data for HTTP requests, which can be reused until the data becomes "stale."
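    And for the caching side, a sketch using App Engine's memcache API (the key name and the render helper are hypothetical):

        import com.google.appengine.api.memcache.MemcacheService;
        import com.google.appengine.api.memcache.MemcacheServiceFactory;

        public class PageCache {
            public static String homepageHtml() {
                MemcacheService cache = MemcacheServiceFactory.getMemcacheService();
                String html = (String) cache.get("homepage-html"); // null on a cache miss
                if (html == null) {
                    html = renderHomepage();           // hypothetical expensive render
                    cache.put("homepage-html", html);  // reused until it becomes stale
                }
                return html;
            }

            private static String renderHomepage() {
                return "<html>...</html>"; // placeholder for a datastore-backed render
            }
        }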

    Monday, March 29, 2010

    The Java 7 Features Bound to Make Developers More Productive

    If you've tracked each major Java release, you probably were eager to see what new packages were in each one and how you could employ them in your Java projects. Along the same lines, the next major Java SE release, Java 7, promises several new features across all packages, such as modularization, multi-language support, developer productivity tools, and performance improvement. I think programmers eventually will begin specializing in individual Java packages (i.e., java.util programmers, java.io programmers, java.lang programmers, etc.), but until then, let's explore a few of the notable new developer productivity features slated for Java 7.

    New Objects Class

    The new Objects class of the java.util package provides a fail-safe way to compare two objects at runtime:
    1. The equals() method of the Objects class performs a null-safe comparison: it returns true when both arguments are the same reference (or both are null), and otherwise defers to the first argument's equals() method.
    2. The deepEquals() method piggybacks on the first argument's equals() method definition; when both arguments are object arrays, Arrays.deepEquals() is invoked on them.
    The new Objects class provides all the required static utility methods.
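    A quick example of both methods (plain Java 7, no external dependencies):

        import java.util.Objects;

        public class ObjectsDemo {
            public static void main(String[] args) {
                System.out.println(Objects.equals("java", "java"));   // true
                System.out.println(Objects.equals(null, null));       // true -- null-safe, no NPE

                String[] xs = {"a", "b"};
                String[] ys = {"a", "b"};
                System.out.println(Objects.equals(xs, ys));           // false -- different references
                System.out.println(Objects.deepEquals(xs, ys));       // true  -- compares contents
            }
        }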

    New Classes to Operate on File System

    Java SE 7 provides classes that greatly simplify the age-old integration process in which one application drops files at a predefined shared location and another application picks them up. Java 7 provides a new class, WatchService, that delivers notifications of events taking place in the file system under watch.
    The following steps create an asynchronous file-watcher service:
    1. Obtain the path from the File class.
      Path fPath = new File(filePath).toPath();


    2. Obtain a handle to the Watch service from the file system.
      dirWatcher = fPath.getFileSystem().newWatchService();


    3. Register which type of events you are interested in.
      fPath.register(dirWatcher,
          StandardWatchEventKind.ENTRY_CREATE,
          StandardWatchEventKind.ENTRY_DELETE,
          StandardWatchEventKind.ENTRY_MODIFY);


    4. Wait for the event to happen.
      try {
          WatchKey key = dirWatcher.take();
      } catch (InterruptedException ie) {
          return;
      }


      The WatchKey class now has all the details of the event that occurred in the directory.

    5. Loop through Step 4 to continue receiving events.
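    Putting the steps together, here is a minimal end-to-end sketch. (Note that the snippets above reflect a pre-release API; in the final Java 7 release the event constants live in StandardWatchEventKinds, and a path is typically obtained via Paths.get().)

        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.nio.file.StandardWatchEventKinds;
        import java.nio.file.WatchEvent;
        import java.nio.file.WatchKey;
        import java.nio.file.WatchService;

        public class DirWatcher {
            public static void main(String[] args) throws Exception {
                Path dir = Paths.get(args[0]);
                WatchService watcher = dir.getFileSystem().newWatchService();
                dir.register(watcher,
                        StandardWatchEventKinds.ENTRY_CREATE,
                        StandardWatchEventKinds.ENTRY_DELETE,
                        StandardWatchEventKinds.ENTRY_MODIFY);
                while (true) {
                    WatchKey key = watcher.take();          // blocks until an event arrives
                    for (WatchEvent<?> event : key.pollEvents()) {
                        System.out.println(event.kind() + ": " + event.context());
                    }
                    if (!key.reset()) {                     // stop if the directory is gone
                        break;
                    }
                }
            }
        }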

    New Classes for Concurrency Package

    The Java SE team added a wide variety of new classes to Java 7 to cater to various concurrency functionalities. Most notable among them are the RecursiveAction and RecursiveTask classes, which simplify new algorithm development. Understanding the difference between heavyweight and lightweight processes will help you grasp the value of these new classes.
    • A heavyweight process gets a replica of the code, stack, and data from the parent process. You create a heavyweight process by invoking fork().
    • A lightweight process gets its own stack and shares resources and data with other threads or the parent thread. The Unix Thread API standard (POSIX) provides methods to create a thread.
    Java 7 defines a new abstract class called ForkJoinTask, a lightweight process that generates a distinct stream of control flow from within a process. RecursiveAction and RecursiveTask are abstract subclasses of ForkJoinTask.
    To code a recursive call, you must subclass either one of these classes and define the compute() method. The getRawResult() method returns null for RecursiveAction and returns a value for RecursiveTask. The Java 7 documentation provides a simple example for each of these classes.
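    As a rough illustration (my own example, not one from the Java 7 documentation), here is a RecursiveTask that sums an array by splitting the work across a fork/join pool:

        import java.util.Arrays;
        import java.util.concurrent.ForkJoinPool;
        import java.util.concurrent.RecursiveTask;

        public class ForkJoinDemo {

            // Sums a slice of the array, splitting it until chunks are small enough.
            static class SumTask extends RecursiveTask<Long> {
                private static final int THRESHOLD = 1000;
                private final long[] data;
                private final int lo, hi;

                SumTask(long[] data, int lo, int hi) {
                    this.data = data; this.lo = lo; this.hi = hi;
                }

                @Override
                protected Long compute() {
                    if (hi - lo <= THRESHOLD) {               // small enough: sum directly
                        long sum = 0;
                        for (int i = lo; i < hi; i++) sum += data[i];
                        return sum;
                    }
                    int mid = (lo + hi) >>> 1;
                    SumTask left = new SumTask(data, lo, mid);
                    SumTask right = new SumTask(data, mid, hi);
                    left.fork();                              // run the left half asynchronously
                    return right.compute() + left.join();     // do the right half here, then join
                }
            }

            public static void main(String[] args) {
                long[] data = new long[1000000];
                Arrays.fill(data, 1L);
                long sum = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
                System.out.println(sum); // prints 1000000
            }
        }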

    Code Development Made Too Easy?

    For me, the joy of being a computer scientist is spending long hours writing code for various algorithms. The problem solving keeps my mind alert, and the computations keep going in my head even in sleep. All the utilities in Java 7 take much of that joy of programming away, but they contribute to the bottom line for the companies supporting Java projects, which is what really matters for Java.

    PDF and Java

    I discovered a Java library for PDF from Etymon Consulting. Although it does not cover the full specification, it does provide a convenient approach for reading, changing, and writing PDF files from within Java programs. As with any Java library, the API is organized into packages. The main package is com.etymon.pj.object. Here, you'll find an object representation of all the PDF core objects: arrays, booleans, dictionaries, names, null, numbers, references, streams, and strings. Where the Java language provides an equivalent object, it is used, but with a wrapper around it for consistency purposes. So, for example, the string object is represented by PjString.

    When you read a PDF file, the Java equivalents of the PDF objects are created. You can then manipulate the objects using their methods and write the result back to the PDF file. You do need knowledge of the PDF language to do some of the manipulations effectively. The following lines, for example, create a Font object:
     
    PjFontType1 font = new PjFontType1(); 
    font.setBaseFont(new PjName("Helvetica-Bold")); 
    font.setEncoding(new PjName("PDFDocEncoding")); 
    int fontId = pdf.registerObject(font);


    where pdf is the object pointer to a PDF file.

    One thing I wanted to do was to change parts of the text in a PDF file to create "customized" PDFs. While I have access to the PjStream object, the byte array containing the text is compressed, and the current library does not support LZW decompression. It does support decompression of the Flate algorithm.
    Despite some limitations, you can still do many useful things. If you need to append a number of PDF documents programmatically, you can create a page and then append it to the existing PDF documents, all from Java. The API also provides you with information about the document, such as the number of pages, author, keywords, and title. This would allow a Java servlet to dynamically create a page containing the document information with a link to the actual PDF files. As new PDF files are added and old ones deleted, the servlet would update the page to reflect the latest collection.
    Listing 1 shows a simple program that uses the pj library to extract information from a PDF file and print that information to the console.
     
    Listing 1.
    import com.etymon.pj.*;
    import com.etymon.pj.object.*;

    public class GetPDFInfo {
        public static void main(String args[]) {
            try {
                Pdf pdf = new Pdf(args[0]);
                System.out.println("# of pages is " + pdf.getPageCount());
                int y = pdf.getMaxObjectNumber();
                for (int x = 1; x <= y; x++) {
                    PjObject obj = pdf.getObject(x);
                    if (obj instanceof PjInfo) {
                        System.out.println("Author: " + ((PjInfo) obj).getAuthor());
                        System.out.println("Creator: " + ((PjInfo) obj).getCreator());
                        System.out.println("Subject: " + ((PjInfo) obj).getSubject());
                        System.out.println("Keywords: " + ((PjInfo) obj).getKeywords());
                    }
                }
            } catch (java.io.IOException ex) {
                System.out.println(ex);
            } catch (com.etymon.pj.exception.PjException ex) {
                System.out.println(ex);
            }
        }
    }
    
    
    Before you compile the above program, you need to download the pj library, which includes the pj.jar file. Make sure your CLASSPATH includes the pj.jar file.
    The program reads the PDF file specified at the command-line and parses it using the following line:

    Pdf pdf = new Pdf(args[0]);
    It then goes through all the objects that were created as a result of parsing the PDF file and searches for a PjInfo object. That object encapsulates information such as the author, subject, and keywords, which are extracted using the appropriate methods. You can also "set" those values, which saves them permanently in the PDF file.
    There are a number of sample programs that ship with the pj library, along with the standard javadoc-style documentation. The library is distributed under GNU General Public License.

    Conclusion

    Despite the additions and advancements of HTML, PDF continues to be the most popular means for sharing rich documents. As a programming language, Java needs to be able to interact with that data. The pj library shown here is a preview of how PDF objects can be modeled in Java so that Java's familiar constructs can be used to manipulate seemingly complex PDF documents. With this type of interaction, applications that need to serve rich documents can actually "personalize" the content before sending it out. This scenario applies, for example, to many legal forms where a hand signature is still required and the form is too complex to be drawn entirely in HTML. Java and PDF provide a nice solution for these types of applications.

    Selenium: Automated Integration Testing for Java Web Apps

    The value of unit tests is well established, and ideally every application would have a complete suite of them. However, in the real world, not all applications possess these ideal qualities. In reality, developers have to work with applications that are not well designed or developed and that may not have any unit tests. This makes modifying or enhancing these applications riskier.
    In such circumstances, running automated integration tests might be quicker and just as effective. The integration tests will allow you to modify/enhance the application with confidence. Integration tests also test the application as a whole, which unit tests do not. Unit tests execute only a part of the application in isolation. While integration tests can detect issues in any of the application's components, unit tests detect issues only within a particular component.

    Automated integration tests can be useful particularly for legacy applications, CRUD applications and applications that have business logic tightly coupled to the environment in which they run. The Selenium web application testing system is a powerful tool for implementing automated integration testing for Java-based web applications. In his Web Developer's Virtual Library (WDVL) article, "Selenium: Automated Integration Testing for Java Web Apps," Avneet Mangat explains automated integration testing with Selenium. You will learn how to develop integration tests using the Selenium IDE, how to export the integration tests as JUnit tests, and then how to automate test execution.
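    To give a flavor of what an exported test looks like, here is a rough sketch in the JUnit 3 style that Selenium IDE exports; the application URL, locators, and assertion text are invented for illustration:

        import com.thoughtworks.selenium.DefaultSelenium;
        import com.thoughtworks.selenium.Selenium;
        import junit.framework.TestCase;

        public class LoginFlowTest extends TestCase {
            private Selenium selenium;

            @Override
            protected void setUp() throws Exception {
                // Assumes a Selenium RC server running locally on the default port.
                selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://localhost:8080/");
                selenium.start();
            }

            public void testLogin() {
                selenium.open("/login");
                selenium.type("id=username", "demo");
                selenium.type("id=password", "secret");
                selenium.click("id=submit");
                selenium.waitForPageToLoad("30000");
                assertTrue(selenium.isTextPresent("Welcome"));
            }

            @Override
            protected void tearDown() throws Exception {
                selenium.stop();
            }
        }

    Because the test drives a real browser against the running application, it exercises every layer at once, which is exactly the whole-application coverage the article describes.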

    JSF 2.0 Views: Hello Facelets, Goodbye JSP

    JavaServer Faces (JSF) is a Java component UI framework for building dynamic pages for a web application. JSF technology provides an API for creating, managing, and handling UI components and a tag library for using components within a web page. The new release of JavaServer Faces, JSF 2.0, is a major release for the specification, and it will be part of the Java Enterprise Edition 6 platform.

    This latest release has several interesting features that make the development and deployment of JSF applications simple and easy.

    Unlike JSF 1.x versions, which use JavaServer Pages (JSP) for views, JSF 2.0 mandates support for Facelets as the view technology for JSF pages. Like JSP, Facelets are an implementation of View Declaration Language (VDL), which allows developers to declare UI components in different presentation technologies using HTML templates. However, because the Facelets view technology has been designed specifically to leverage the features of JSF, Facelets provide JSF developers with a simpler, more powerful programming model than JSP. That is why, beginning with JSF 2.0, Facelets replace JSP (JSF 2.0 retains JSP support only for backward compatibility).

    In this article, we explore what makes Facelets superior to JSP for JSF applications, as well as how JSF 2.0 supports them. We use a demo application and provide some code samples to highlight the power of this new technology.

    Facelets Features

    In Facelets, pages are compiled to an abstract syntax component tree, which is built into a UIComponent hierarchy at runtime. Facelets tags don't need to be declared in a tag library descriptor (TLD) file, and tag attributes are dynamic: they automatically get mapped to properties. One of the main Facelets features not available in JSP is page templating. In addition, Facelets execute faster than JSPs.

    Facelets pages are authored using XHTML, and they provide good expression language (EL) support. Facelets also leverage the concept of XML namespaces to support these tag libraries:
    • JSF HTML Tag Library
    • JSF Core Tag Library
    • JSTL Core Tag Library
    • JSTL Functions Tag Library
    • JSF Facelets Tag Library

    Sun releases Java EE 6

    Almost 10 years to the day since the 1999 release of Java EE 1.2, Sun announced today that Java EE 6 is ready for business.

    In the three years since the last major update to the Java EE platform, a great deal has changed in Java, and EE 6 reflects these changes. Chief among the new features are a slimmed-down Web Profile installation of the EE platform, support for RESTful Web services, and the last-minute inclusion of dependency injection.

    Sun also released NetBeans 6.8 and GlassFish Enterprise Application Server version 3. Both of these are compatible with EE 6 and include support for new features. GlassFish, for example, is also available in a Web Profile form, and NetBeans 6.8 adds handlers for REST.

    The Web Profile form of Java EE 6 is a slimmed-down installation of the Java EE ecosystem. Built in response to developer complaints over the years, Java EE 6 adds the concept of Profiles, which will be targeted installations for specific purposes. Initially, only the Web Profile is available, but Sun has said it is looking into more configurations. The Web Profile version of Java EE 6 installs only the pieces of the language and ecosystem needed to run Web applications, such as JPA and JSF.

    GlassFish too can be slimmed down for specific Web purposes. Both the Java EE 6 environment and the GlassFish application server can then be upgraded to the full Java EE 6 stack without the need to change or update applications, said Tom Kincaid, executive director of Sun's Application Platform organization. ”We expect this to be very popular with Web application development and deployment,” he said.

    JSR 330 was a latecomer to the Java EE 6 party. This specification for dependency injections in Java originated at Google. JSR 330 came together and was passed through the JCP this fall, a break-neck pace for the JCP to approve a new specification.

    From the specification page at the JCP website: JSR 330 created “a set of annotations for use on injectable classes” and “a typesafe, user-friendly injector configuration API that provides an integration point for higher-level dependency injection configuration approaches.”
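    As a small illustration of what those annotations look like in practice (the class names here are hypothetical), a JSR 330-style injectable class and its consumer:

        import javax.inject.Inject;
        import javax.inject.Singleton;

        @Singleton
        class PaymentService {                    // an injectable class
            void charge(String account, long cents) { /* ... */ }
        }

        class CheckoutHandler {
            private final PaymentService payments;

            @Inject                               // the injector supplies the dependency
            CheckoutHandler(PaymentService payments) {
                this.payments = payments;
            }

            void checkout(String account, long cents) {
                payments.charge(account, cents);
            }
        }

    The point of the spec is that these annotations are shared: the same classes can be wired by any conforming injector configured through the typesafe API the JSR describes.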

    No fragile fish
    GlassFish Enterprise Application Server version 3 is a commercially licensed form of the open-source GlassFish server project. Owning the commercial version will entitle developers to free software updates and bug fixes, said Kincaid.

    “GlassFish Enterprise Server v3 goes much further with its modular architecture, management and monitoring capabilities, and update center console,” he said.

    GlassFish is now built on an OSGi microkernel, the Apache Felix project, said Kincaid. “Only the necessary modules are loaded at startup. As applications are deployed, only the required modules are loaded,” he said.

    NetBeans 6.8, on the other hand, includes new features that aren't just about Java. NetBeans 6.8 is the first version of the IDE to support PHP 5.3, and that language is growing in popularity among NetBeans users, said David Folk, director of developer tools engineering at Sun. He said that the new version includes packaging and deployment tools to make building applications easier.

    “We provide support for all the Java EE 6 libraries when you're working with [build management tool] Maven,” said Folk, highlighting NetBeans 6.8's enhanced Ant and Maven integrations.

    NetBeans 6.8 is available for free online, while the GlassFish Enterprise Application Server v3 is included with the Java EE 6 release.

    Mono makes headway on the Linux desktop

    Mono—the open-source runtime for .NET applications—is stealing some of the thunder from Java applications for the Linux desktop. Recent Linux distros have featured new .NET consumer applications that run under Mono. Part of the reason is that the distributions contain up-to-date Mono development tools, while their Java tools are obsolete.

    "We have seen a real spike in Mono [application] development for the Linux desktop over the last two to three years," said RedMonk analyst Stephen O'Grady. He cited Mono applications, including the Banshee music player, the GNOME Do desktop search tool, and the Tomboy note-taking application as examples of Mono applications that have no Java counterpart equivalent in popularity.

    The Mono project is an open-source implementation of the Common Language Infrastructure (CLI), a technology that was created by Microsoft and subsequently standardized by ECMA and ISO. Microsoft supports the Mono effort with technical assistance.

    When it comes to desktop Linux applications, "Mono is clearly more popular than Java. I've been using desktop Linux as my primary desktop for three to four years, and use just a handful of Java apps day to day," O’Grady said.

    If Mono is succeeding, its success appears to be limited to the Linux desktop. "While it's certainly true that Mono has been used to write some nice applications, I have seen very little usage of it among independent developers outside the Linux desktop community. It's also worth noting that many of the high-profile Mono applications are written and maintained by Novell," said Ian Murdock, Debian founder and vice president of emerging platforms at Sun Microsystems.

    "That's a pretty classic platform strategy: Try to get your platform broader distribution (in this case, integrated into the GNOME desktop) by creating compelling applications that require it," he added.

    There’s no good data on how many consumers or developers have installed Mono. The software project team collects some data about package installations on Debian and Ubuntu, but Mono project leader and Novell vice president Miguel de Icaza believes that the data is skewed because tracking is opt-in only. "We publish the source code, and then people redistribute it in packaged form, and we have no way of tracking its use."

    However, it’s clear that Mono is currently attracting Linux desktop application developers more successfully than Java.

    Out-of-date tools
    One big reason may be that Debian and Ubuntu, two popular Linux distributions, include current versions of the Mono development environment, MonoDevelop, while bundling an old version of Eclipse, the popular open-source Java IDE. Both distributions ship Eclipse 3.1, which was introduced in 2005; the Eclipse tool chain is updated annually, and the latest release, Eclipse 3.5, came out in 2009.

    Friday, March 26, 2010

    Unisys Expands Support for Modernised Applications and Mobile Devices on ClearPath Mainframes

    New capabilities make it easier for workers to use smartphones and other consumer devices to access and manage ClearPath systems

    Unisys Corporation (NYSE: UIS) announced significant enhancements to its ClearPath family of mainframe servers. These enhancements are designed to make it easier for clients to modernise their application environments and enable workers to access and manage ClearPath systems more efficiently through smartphones and other end-user devices.

    Happy 9th Birthday to Apple’s Mac OS X

    CUPERTINO, California—March 21, 2001—Apple today announced that beginning this Saturday, March 24, customers can buy Mac OS X in retail stores around the world. Mac OS X is the world’s most advanced operating system, combining the power and openness of UNIX with the legendary ease of use and broad applications base of Macintosh.

    “Mac OS X is the most important software from Apple since the original Macintosh operating system in 1984 that revolutionized the entire industry,” said Steve Jobs, Apple’s CEO. “We can’t wait for Mac users around the globe to experience its stability, power and elegance.”

    Over 350 applications for Mac OS X are shipping today, with hundreds more coming by this summer. More than 10,000 developer organizations around the world are working on over 20,000 Mac OS X applications, including 4D, Aladdin Systems, Alias/Wavefront, Avid, Connectix, Dantz, Digidesign, EarthLink, FileMaker, IBM, Macromedia, Microsoft, MYOB, Palm, Sun, Symantec, and Thursby Software Systems.

    Apple will also ship Mac OS X versions of its three most popular applications on March 24, available as free downloads at http://www.apple.com: iMovie 2, the world’s most popular and easiest-to-use digital video editing software; iTunes, Apple’s wildly popular “jukebox” software that lets users create and manage their own music library; and a preview version of AppleWorks 6.1, Apple’s award-winning productivity application.

    Mac OS X is built upon an incredibly stable, open source, UNIX-based foundation called Darwin and features true memory protection, preemptive multi-tasking and symmetric multiprocessing when running on the dual processor Power Mac G4. Mac OS X includes Apple’s new Quartz 2D graphics engine (based on the Internet-standard Portable Document Format) for stunning graphics and broad font support; OpenGL for spectacular 3D graphics and gaming; and QuickTime for streaming audio and video. Mac OS X also features an entirely new user interface called Aqua. Aqua combines superior ease of use with amazing new functionality such as the Dock, a breakthrough for organizing documents and document windows.

    In addition, Mac OS X includes hundreds of new features, such as:
    • Dynamic memory management, eliminating “out of memory” messages or the need to adjust memory for applications
    • Advanced power management, so that PowerBook and iBook systems wake from sleep instantly
    • QuickTime 5, shipping for the first time as an integrated feature of Mac OS X
    • Automatic networking, allowing users to get on the Internet using any available network connection, without adjusting settings
    • A single interface to easily manage all network and Internet connections, including direct support for DSL systems that require PPPoE connectivity
    • Full PDF support and PDF integration into the operating system, so that Mac OS X applications can generate standard PDF documents to be shared with any platform
    • Direct support for TrueType, Type 1 and OpenType fonts, and an intuitive and flexible interface for managing fonts and groups of fonts
    • More than $1,000 of the best fonts available today, including Baskerville, Hermann Zapf’s Zapfino, Futura, and Optima; as well as the highest-quality Japanese fonts available, in the largest character set ever on a personal computer
    • iTools integration into Mac OS X, for direct access to iDisk free Internet storage in the Finder and Open/Save dialog boxes, and free IMAP mail for Mac.com email accounts
    • Built-in support for popular HP, Canon, and Epson printers
    • Easy-to-administer multi-user environment, with access privileges to keep documents secure
    • Powerful web development tools and technologies such as WebDAV, XML, Apache and QuickTime
    • BSD UNIX services including popular shells, Perl and FTP
    • Support for symmetric multi-processing, so that on dual-processor Power Mac G4 systems, both processors are used automatically to deliver up to twice the performance
    • File system and network security including support for Kerberos
    • Support for Java 2 Standard Edition built directly into Mac OS X, giving customers access to cross-platform applications (see the sketch below)
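    To make the cross-platform point in the last item concrete, here is a trivial sketch (the class name is invented) that runs unchanged on Mac OS X's bundled Java 2 runtime or on any other JVM:

        public class HelloAnywhere {
            public static void main(String[] args) {
                // The same compiled class runs on any JVM; on Mac OS X
                // this prints "Mac OS X" with no platform-specific code.
                System.out.println("Hello from " + System.getProperty("os.name"));
            }
        }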

    Apple’s successful Mac OS X Public Beta, which shipped in September 2000, was instrumental in several key enhancements to the operating system. Apple shipped more than 100,000 copies of Mac OS X Public Beta and received more than 75,000 individual user feedback entries from Mac users and developers worldwide.

    To help customers migrate to Mac OS X, Apple iServices will offer several new services, including a comprehensive set of Mac OS X training and certification offerings for Mac OS X system administrators.

    Pricing & Availability
    Mac OS X will ship with seven languages (English, Japanese, French, German, Spanish, Italian, and Dutch) included on a single CD. In addition, the Mac OS X box will include a full copy of Mac OS 9.1, for running Classic applications, and the Mac OS X Developer Tools CD.

    Mac OS X will be available through The Apple Store and through Apple Authorized Resellers for a suggested retail price of $129 (US) beginning March 24, 2001.

    Mac OS X requires a minimum of 128MB of memory and is designed to run on the following Apple products: iMac, iBook, Power Macintosh G3, Power Mac G4, Power Mac G4 Cube and any PowerBook introduced after May 1998.

    Source: Apple

    Syncro Soft Announces New Editions of XML Editor and Author

    Syncro Soft Ltd, the developer of Oxygen XML Editor and Author, has announced the immediate availability of version 11.2 of its XML Editor and XML Author tools.

    The Oxygen XML tools combine content-authoring features, such as the CSS-driven visual XML editor, with a fully featured XML development environment. They include ready-to-use support for the main document frameworks (DITA, DocBook, TEI, and XHTML), as well as support for all XML Schema languages, XSLT/XQuery debuggers, a WSDL analyzer, XML database support, XML diff and merge, a Subversion client, and more.
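    To give a sense of the XSLT workflow that features like the XSLT debugger revolve around, here is a minimal transformation sketch using the JDK's standard javax.xml.transform API; this is not Oxygen's own component API, and the file names are hypothetical:

        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.stream.StreamResult;
        import javax.xml.transform.stream.StreamSource;

        public class ApplyStylesheet {
            public static void main(String[] args) throws Exception {
                // Compile the stylesheet, then apply it to the input
                // document, writing the transformed output to a file.
                TransformerFactory factory = TransformerFactory.newInstance();
                Transformer transformer =
                        factory.newTransformer(new StreamSource("article.xsl"));
                transformer.transform(new StreamSource("article.xml"),
                        new StreamResult("article.html"));
            }
        }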

    Version 11.2 of the Oxygen XML tools includes improvements to the XML authoring and development tools, improved support for large files, an SVN client, and the addition of visual XML editing as a separate component that can be integrated into Java and web applications. Also included is a new spell-checking engine.

    The Oxygen XML tools are available through a variety of licensing methods and editions.