...making Linux just a little more fun!
Constantine Shulyupin [const at linuxdriver.co.il]
The Linux kernel is one of the most complex open source projects. There are a lot of books about it, but it is still a difficult subject to comprehend. The Interactive map of Linux kernel (http://www.makelinux.net/kernel_map) gives you a top-down view of the kernel. You can see the most important layers, functionalities, modules, functions, and calls. You can zoom in and drag around to see details. Each item on the map is a hypertext link to source code or documentation.
-- Constantine Shulyupin Freelance Embedded Linux Engineer http://www.linuxdriver.co.il/
Rick Moen [rick at linuxmafia.com]
Thread was about software options for "syncing" smartphones with calendar/mail servers (more or less).
----- Forwarded message from Rick Moen <rick@linuxmafia.com> -----
Date: Sun, 21 Sep 2008 17:20:36 -0700
From: Rick Moen <rick@linuxmafia.com>
To: luv-main@luv.asn.au
Subject: Re: Linux (sendmail/imap) and Mobile Phone access

Quoting James Harper (james.harper@bendigoit.com.au):
> According to Slashdot, Zarafa has just been open sourced.
Rules of thumb:
1. Be skeptical of any claim that a Web 2.0 application is open source until you've carefully examined the actual licence. There's been an extremely large amount of cheating, deception, and outright lying about licensing in this market.
2. Be doubly, super-skeptical of those claims when you go through the project's Web front page and find three claims of open sourcing but nothing whatsoever about the licence.
In this case, after a great deal of digging, one finally finds, on a subpage, a Sept. 18 press release saying that the "full core" of the Zarafa platform will, at some unspecified time, be made available under the Affero GPLv3. The Affero GPLv3 is indeed a genuinely open source licence targeted at the Web 2.0 / SaaS / hosted-software / ASP market -- even though it isn't yet OSI-certified.
The real remaining question, then, is whether the "full core" -- whatever that means -- will turn out to be a viable, useful piece of software. Another scam that's extremely common in the Web 2.0 space is to have a deliberately buggy, undocumented, incomplete "community" version under open source licensing, functioning strictly as a sales come-on to upsell users to a "commercial" (proprietary) version.
I'm trying to remember the leading example. Hmm, Sug[censored for reasons of legal self-defence]
If you want to know without doing a pilot installation of your own, wait for comments on the Debian package. ;->
----- End forwarded message -----
[ Thread continues here (4 messages/6.73kB) ]
Jim Jackson [jj at franjam.org.uk]
Hi, all
A friend is looking for recommendations for cross platform software, ideally linux, MacOS-X, Microsoft - "an address book/ diary programme that can be synchronised with other computers. Ideally this would work like the old Psion one, where it can be synchronised automatically with Lotus Organizer or Microsoft Outlook on an office computer."
Any sage advice?
Jim
[ Thread continues here (2 messages/1.46kB) ]
Vesna [greemlimusine at chariot.net.au]
Hi, I am new to Linux. I have installed Vector Linux on my machine, but unfortunately where I live only dial-up internet is available, and I can only dial up the internet if I am signed in as root. If I sign in with the other account, Chestnut Dialer will not dial up at all; it keeps telling me it cannot open the modem. Is there a way to use the other account to dial up the internet, since I keep getting the warning message that I shouldn't run the computer as root? Thanks, Vesna
[ Thread continues here (3 messages/4.62kB) ]
Britto I [britto_can at yahoo.com]
Folks
For some reason, I want one of my services to use a port < 1024.
It says "access denied" for users other than root. Is there any workaround for this?
-- Britto
[ Thread continues here (6 messages/4.86kB) ]
saikiran gundeti [saikirangundeti at gmail.com]
Hi,
Could you please provide a procedure for doing remote tape backups with cpio or tar?
---------------------- saikirangundeti
[ Thread continues here (2 messages/1.05kB) ]
M.Rashid Shahzad [donsmooker at yahoo.com]
Please reply with the method for removing bad sectors, and the name of the software used for this procedure. Please explain the method in easy words. Please help with this. Thanks. Bye.
[ Thread continues here (6 messages/5.55kB) ]
Martin [martin at marcher.name]
Hello,
according to your author FAQ[0], under "Copyright Issues", you state that you will publish the works sent to you under the Open Publication License. The Wikipedia article[1] states that the author of the OPL recommends using a Creative Commons License as a replacement. I understand that there are issues with the existing articles which may prevent relicensing them, but for new articles, would it be OK if I asked for my articles to be published under a Creative Commons License[2]? Personally, I prefer "by-nc-sa" for written works and "by-sa" for code.
Given that efforts to keep the OPL updated seem to have stopped (it actually never made it beyond version 1.0, dated 1999-07-08), I think switching to a CC license should be considered, maybe even letting authors choose between:
* default (stay with OPL)
* the CC options
any thoughts on that?
regards martin
PS: No, it's not that I think you can't google, but I guess it's a lot easier to just click the links to my sources; we've all got enough work to do.
[0] http://linuxgazette.net/faq/author.html#policy
[1] http://en.wikipedia.org/w/index.php?title=Open_Publication_License&oldid=230659639
[2] http://creativecommons.org/licenses/by-nc-sa/3.0/
-- http://www.xing.com/profile/Martin_Marcher You are not free to read this message, by doing so, you have violated my licence and are required to urinate publicly. Thank you.
[ Thread continues here (11 messages/51.39kB) ]
Joey Prestia [joey at linuxamd.com]
Hi all,
I want to gather some information about using DNSBLs on mail servers. I have been reading the information on most of the more popular blacklists, like Spamcop and Spamhaus, and now I have come up with all kinds of questions on the subject.
I would like to hear from any mail server administrators about their experiences with these methods of rejecting spam at the "gate". It seems apparent that one must gauge what type of spam and what type of lists to use very carefully, because of the possibility of refusing valid mail.
Is the implementation of using a DNSBL definitely something mail server administrators should consider?
Is it common practice to use spamassassin and DNSBL together to reduce bombardment of spam?
I have been using spamassassin for some time and see that it does a very good job of filtering and correctly labeling mail. Judging by the spamassassin headers, though, it seems the majority of this spam could be prevented altogether by implementing the correct DNSBL or DNSBLs at the mail server level.
One thing I have heard is that it is not good practice to put something like this into effect, because many bigger institutions can, and periodically do, get put on blacklists through no fault of their own. One example I have seen: http://www.stanford.edu/services/email/antispam/blacklist.html -- is this an accurate representation of some of the possible effects of putting this into practice?
Any recommendations as to suggested best practices in using these measures?
Thanks,
-- Joey Prestia L. G. Mirror Coordinator http://linuxamd.com Main Site http://linuxgazette.net
[ Thread continues here (9 messages/19.08kB) ]
[ In reference to "A Slightly Advanced Introduction to Vim" in LG#152 ]
Eric Deschamps [erdesc at free.fr]
Hi,
This article is being translated into French, as you can see on this page: http://www.traduc.org/docs/lgazette/nouvel_etat_gazette.php?1.152.2
I am reviewing the translated article, and I think there is a tiny error in the section "Finding and Substituting", in the last example:
:1,'b/dog/cat/ceg
is missing the 's' for substitute, and should be, I guess:
:1,'bs/dog/cat/ceg
isn't it?
Thank you for this good article, I am learning a lot!
Regards,
Eric
[ Thread continues here (5 messages/7.15kB) ]
Silas S. Brown [ssb22 at cam.ac.uk]
A quick "download whole directory as zip file" CGI
If you have a large collection of files and you put them on your webserver without any special index, then it's likely that the server will generate its own index HTML for you. This is all very well, but I recently had the tedious experience of downloading 46 separate small files from my webserver, using somebody's Windows box with Internet Explorer and a "download manager" that took me through 3 dialog boxes per click, in a foreign language. Wouldn't it be nice if I could tell the web server to zip them all up and send me the zip file?
You can do this because the Unix "zip" utility (package "zip" on most distributions) is capable of writing to standard output. At a minimum, you can create a CGI script like this:
#!/bin/bash
echo Content-Type: application/zip
echo "Content-Disposition: attachment; filename=files.zip"
echo
zip -9r - *
This zips the content of the current directory, sending the result to standard output (that's what the dash - is for) and telling the Web browser that it's a zip file called files.zip.
But we can go one up on that - the following short script will list the contents of the directory, with an optional "download as zip" link that sets the filename appropriately. If you're using the small Mathopd webserver, you can edit /etc/mathopd.conf and set AutoIndexCommand to the path of this script:
export Filename="$(pwd|sed -e 's,.*/,,').zip"
if test "$QUERY_STRING" == zip; then
  echo Content-type: application/zip
  echo "Content-Disposition: attachment; filename=$Filename"
  echo
  zip -9r - *
else
  echo "Content-type: text/html; charset=utf-8"
  echo
  echo "<HTML><BODY><A HREF=\"..\">Parent directory</A> |"
  echo "<A HREF=\"./?zip\">Download $Filename</A>"
  echo "<h2>Contents of $Filename</h2><UL>"
  for N in *; do
    echo "<LI><A HREF=\"$N\">$N</A> ($(du -h "$N"|cut -f1))</LI>"
  done
  echo "</UL></BODY></HTML>"
fi
This assumes that any non-ASCII filenames will be listed in UTF-8 (otherwise change the charset).
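If you'd like to test the longer script from the command line before wiring it into your webserver, you can simulate both branches by setting QUERY_STRING yourself. This assumes you have saved it with a "#!/bin/bash" first line as, say, zipindex.cgi (the name is just an example) and made it executable:

chmod +x zipindex.cgi
QUERY_STRING= ./zipindex.cgi      # prints the HTML directory listing
QUERY_STRING=zip ./zipindex.cgi   # prints the HTTP headers followed by the raw zip data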
-- Silas S Brown http://people.pwf.cam.ac.uk/ssb22
[ Thread continues here (21 messages/44.49kB) ]
By Deividson Luiz Okopnik and Howard Dyckoff
Please submit your News Bytes items in plain text; other formats may be rejected without reading. [You have been warned!] A one- or two-paragraph summary plus a URL has a much higher chance of being published than an entire press release. Submit items to bytes@linuxgazette.net.
The Xen.org community has released the Xen 3.3 engine, which includes enhancements in performance, scalability, and security.
The new engine handles workloads with high degrees of multiplexing and supports optimal power usage per server. In addition, Xen utilizes both hardware and software techniques to enable it to virtualize both current and legacy operating systems.
Xen 3.3 also supports the new Xen Client Initiative (XCI), a Xen.org community effort to accelerate and coordinate the development of fast, free, compatible embedded Xen hypervisors for laptops, PCs and PDAs. The XCI is targeting three use cases: using Xen to run 'embedded IT' VMs that allow remote support, security, and service of PCs through embedded IT applications without any impact on the user's primary desktop OS; 'instant on' applications that can be immediately available as separate VMs from the user's primary desktop OS; and 'application compatibility' VMs, which allow legacy PC applications to run as VMs, alongside the user's primary desktop OS.
Intel has contributed hardware and software virtualization capabilities to ensure that Xen-based solutions take full advantage of next-generation microprocessor technologies. "Xen 3.3 is optimized for Intel's next generation micro-architecture Nehalem features: enhanced power management, performance, I/O and networking features for building flexible resource management solutions in an energy-efficient data center," said Imad Sousou, Director of Intel Open Source Technology Center.
The release is now available for download from the Xen.org community site and is the product of a distributed development effort by senior engineers from more than 50 leading hardware, software, and security vendors. For more information, please visit www.xen.org.
The first Xen Summit of 2009 will occur February 24 - 25 and will be hosted by Oracle in Redwood Shores, CA.
At VMworld 2008 in September, VMware announced its vClient Initiative. vClient is aimed at delivering universal clients - desktops that follow users to any end point while providing an experience that is secure, cost effective, and easy for IT to manage.
This is more significant because vClient is based on ESX, VMware's bare-metal hypervisor. The underlying technology is a step forward for desktop virtualization. With ESX, there is no other host environment like a Linux OS. This also allows an enterprise to project a single, virtualized desktop image that could run on all user computers, regardless of installed OS. IT administrators will be able to manage, update, and secure all of these desktops centrally.
The first step of the initiative is the roll out of VMware View, a set of products that extend VMware's Virtual Desktop Infrastructure (VDI) to include both server-hosted virtual desktops and client virtual desktops that can run on any laptop or desktop computer. VMware View is actually a portfolio of products that give end users a single view of all their applications and data in a familiar, personalized environment - "myview" - that is available from any device and location.
The vClient Initiative also includes several new desktop virtualization technologies previewed at VMworld which VMware will introduce in 2009. These new technologies include client virtualization, image management (available as VMware View Composer) and an offline desktop. Together, these technologies make desktops, applications and user profiles secure and fully mobile. Partners including Fujitsu Siemens, HP and NEC have embraced and endorsed VMware's universal client and the benefits in extending virtualization from the datacenter to the desktop.
VMware View products encapsulate operating systems, applications, and user data into isolated layers, giving organizations the ability to update and deploy each element independently. Additional technologies that will help simplify desktop management include:
- VMware View Composer, an advanced desktop image management tool that
simplifies updating and patching users' desktops while retaining individual
user data and settings.
- Offline Desktop, which enables users to access their server-hosted desktops
offline.
VMworld, now in its fifth year, with more than 14,000 attendees, focuses on products and best practices for virtualizing the enterprise. For more information, visit: http://www.vmware.com/go/vclient
The next Linux Foundation Legal Summit will take place October 14 - 15, 2008 at IBM's Hawthorne, New York facility. This event follows the Legal Summit in April of this year that took place at Motorola Customer Briefing Center in Schaumburg, Ill.
The first LF Legal Summit was held October 25 - 26, 2007 in Boston and focused on effective ways to collaborate in the development of compliance, education programs, and evolving intellectual property rights policies optimized to support open development models.
For more details on this fall's Legal Summit, please visit: https://www.linuxfoundation.org/events/legal
Engineers and researchers at IBM's Hursley development lab in England and the Almaden Research Center in California have demonstrated groundbreaking performance results that outperform the world's fastest disk storage solution by over 250%.
The results were achieved using flash solid-state technology coupled with IBM's industry-leading, highly scalable storage virtualization technology. Under the codename "Project Quicksilver," IBM achieved groundbreaking results, transferring data at a sustained rate of over one million Input/Output operations (I/O) per second -- with a response time of under one millisecond (ms). Compared to the fastest industry benchmarked disk system(1), Quicksilver improved performance by 250% at less than 1/20th the response time, took up 1/5th the floor space, and required only 55% of the power and cooling.
Quicksilver is part of IBM's focus on leveraging emerging systems technologies to deliver advanced data center capabilities. Performance improvements of this magnitude can have profound implications for business, allowing 2 to 3 times the work to be completed in a given timeframe for classic workloads and creating opportunity for entirely new information warehouse and analytic solutions.
For more information about IBM and IBM System Storage, visit http://www.ibm.com/storage.
HP reached an unprecedented milestone in mobile computing: up to 24 hours of continuous notebook operation on a single battery charge. As measured by an industry-standard benchmark, the new HP EliteBook 6930p configured with an optional ultra-capacity battery delivered up to 24 hours of battery runtime.
"All-day computing has been the holy grail of notebook computing," said Ted Clark, senior vice president and general manager, Notebook Global Business Unit, HP. "With the HP EliteBook 6930p, customers no longer have to worry about their notebook battery running out before their work day is over."
This achievement required new equipment in a unique configuration: energy dense HP batteries, new Intel solid-state hard drives (SSD), and a new HP mercury-free LED display. For example, the highly efficient HP Illumi-Lite LED display boosts battery run time by up to 4 hours compared to traditional LCD displays, while the Intel SSD provides up to a 7 percent increase in battery life. Special graphics drivers and a special BIOS were also required.
With up to 24 hours of battery life, business travelers can easily use their laptop on the world's longest scheduled commercial airline flight - linking Newark International Airport and Singapore Changi Airport - approximately 18 hours, 40 minutes. This EliteBook configuration will be available starting in October.
In addition to helping achieve outstanding battery life, the new Intel SSDs provide greater reliability as well as faster system responsiveness.
The ENERGY STAR HP EliteBook 6930p configuration starts at only 4.7 pounds (2.1 kg) and features a 14.1-inch diagonal widescreen display; the mercury-free Illumi-Lite LED is optional.
Sun's developer conference, Tech Days, will travel the globe again to share expertise with the community about the Solaris Operating System, Java technology, JavaFX, MySQL database, NetBeans, GlassFish, OpenSolaris, Sun Studio software, scripting languages, mobile and telecommunications technologies, open source, web application development, and more. Sun posted the details of the Sun Tech Days worldwide developer conferences for 2008-2009 at http://developers.sun.com/events/techdays/index.jsp.
Sun Tech Days is a 13-city world tour showing how developers can leverage Sun technologies to drive industry innovation. Since 2000, Sun Tech Days has seen a six-fold increase in attendance numbers and now reaches over 100,000 developers each year via local events around the world.
Each Tech Days event for 2008-2009 will host Java, MySQL, and OpenSolaris tracks. The Tech Days program will also include:
- Demo Showcase: Six demos highlighting JavaFX, mashups, social
computing, OpenSolaris, PHP/NetBeans and cloud computing.
- Lightning Talks: Content organized and delivered by community speakers
around the different track topics.
In addition, some of the Sun Tech Days events will have an extra day of deep dive content where attendees can choose from either a dedicated advanced training session on the NetBeans IDE, highlighting new features and solutions; or a targeted event for students illuminating challenges and opportunities with next generation technology innovation.
Sun Tech Days Worldwide Developer Conference schedule:
* Sao Paulo, Brazil: September 29-October 1, 2008
* Seoul, South Korea: October 15-17, 2008
* Taipei, Taiwan: November 17, 2008
* Guangzhou, China: November 19, 2008
* Beijing, China: November 21-23, 2008
* Tokyo, Japan: December 2-4, 2008
* Singapore, Singapore: January 20-22, 2009
* New York, United States: February 6-7, 2009
* Hyderabad, India: February 18-20, 2009
* London, United Kingdom: March 25-27, 2009
* Madrid, Spain: March 31-April 2, 2009
* St. Petersburg, Russia: April 8-10, 2009
* Tel Aviv, Israel: May 5-7, 2009
Join us at the Paradise Point Resort for this symposium covering the most innovative systems research, including over 20 high-quality papers in areas including OS architecture, cloud computing, and monitoring, as well as a poster session. Don't miss the opportunity to gather with researchers from across the systems community in what has become a premier forum for discussing the design, implementation, and implications of systems software.
Register by November 17 and save $150!
http://www.usenix.org/osdi08/lgb

On August 15, 2008, Solaris 7 exited EOSL Phase 2. Except through custom support agreements, all support for Solaris 7 is now terminated.
On April 1, 2009, Solaris 8 enters EOSL Phase 2. Vintage patch entitlement (for patches developed on or after April 1, 2009) requires purchase of the Solaris 8 Vintage Patch Service.
Sun announced the release of Sun HPC ClusterTools 8.0, expanding the scope of its HPC software solutions. Sun HPC ClusterTools 8.0 is a production quality MPI (Message Passing Interface) implementation based on the Open MPI 1.3 open source project. In addition to Intel x86, AMD x86, and Sun UltraSPARC-based servers, Sun now offers a complete software stack for both the Linux and Solaris OSs, with all the innovations of OpenSolaris now available to the HPC community.
Sun HPC ClusterTools 8 software is an integrated toolkit that allows developers to create and tune Message-passing Interface (MPI) applications that run on high performance clusters and SMPs. Sun HPC ClusterTools software offers a comprehensive set of capabilities for parallel computing.
For more information, visit Sun's On the Record blog: http://blogs.sun.com/ontherecord/entry/get_your_free_unrestricted_copy and Kuldip Oberoi's blog: http://koberoi.com/2008/09/04/sun-hpc-clustertools-80-now-available-for-solaris-linux/
The beta release of the next version of Fedora occurred at the end of September.
The final GA release is scheduled for mid-November. Local Fedora 10 Release Parties & InstallFests are tentatively scheduled for November 15th. Check here for updates: http://fedoraproject.org/wiki/FedoraEvents
One Laptop per Child will be selling its XO Laptop on Amazon.com in late 2008 as part of a renewed global "Give One, Get One" (G1,G1) program. Although One Laptop per Child views the first iteration of the G1,G1 program as extremely successful (it sold more than 185,000 laptops), delivery of the laptops in the USA did not run smoothly last year, and its fulfillment agent suffered processing and shipping problems. Some buyers did not receive their XOs until March.
This year, they will be selling their new XO laptops on Amazon.com. Perhaps size matters for some things: OLPC has a staff of only two dozen, and they expect to do better with Amazon, which has the resources to process and ship the laptops globally.
In addition, and contrary to some media reports, the Xmas XO will be a Linux-only laptop, and not a dual-boot machine running both a pre-shrunk Windows and Linux.
In conjunction with the Xmas sales plan, Sugar Labs is currently working to get the 8.2 software release finished so it can be loaded onto the Xmas XOs. In addition to interface and performance improvements, the coming release of Sugar will allow the WiFi interface to be turned off when using the XO as a book reader. This, plus a deep suspend mode which consumes only 0.5W, will allow battery life to stretch up to 40 hours.
Google is celebrating its tenth birthday/anniversary with a new project to promote positive change in the world. From their Web site: "Project 10^100 (pronounced "Project 10 to the 100th") is a call for ideas to change the world by helping as many people as possible."
More excerpts:
"10100 is another way of expressing the number "googol," a one followed
by one hundred zeroes. Our company's very name expresses our goal of
achieving great results through smart technology that starts small and
scales dramatically over time to have a tremendous long-term impact.
Project 10^100 is a similar attempt to produce those kinds of scalable
results by harnessing our users' insights and creativity. We don't know
what ideas would help the most people. This project's premise is that maybe
you do."
Entries must be submitted by October 20, 2008. A YouTube video is an optional part of the entry.
The categories for submissions are:
Community: How can we help connect people, build communities and protect
unique cultures?
Opportunity: How can we help people better provide for themselves and their
families?
Energy: How can we help move the world toward safe, clean, inexpensive
energy?
Environment: How can we help promote a cleaner and more sustainable global
ecosystem?
Health: How can we help individuals lead longer, healthier lives?
Education: How can we help more people get more access to better
education?
Shelter: How can we help ensure that everyone has a safe place to live?
Everything else: Sometimes the best ideas don't fit into any category at
all.
Google has committed $10 million to fund up to five ideas selected by an
advisory board. A selection of Google employees will review all the ideas
submitted and select 100 for public consideration. The 100 top ideas will
be announced on January 27, 2009, at which point Google will invite the
public to select twenty semi-finalists. An advisory board will then choose
up to five final ideas for funding and implementation. The following five
criteria will be considered by the advisory panel in evaluating and
selecting the winning ideas:
Reach: How many people would this idea affect?
Depth: How deeply are people impacted? How urgent is the need?
Attainability: Can this idea be implemented within a year or two?
Efficiency: How simple and cost-effective is your idea?
Longevity: How long will the idea's impact last?
More information at: http://www.project10tothe100.com
Citrix Systems has announced Citrix Cloud Center (C3) for service providers. The earlier Citrix Delivery Center product enabled enterprise customers to transform their datacenters into delivery centers, delivering apps and desktops as a "cloud-like" service to their end users. The new C3 product family extends these same benefits to Internet-based cloud providers, providing them with a complete set of service delivery infrastructure building blocks for cloud computing.
Citrix earlier this year introduced per-server pricing for XenServer 5, allowing customers to take full advantage of rapidly increasing CPU and memory capacity of modern servers without paying per CPU core. XenServer 5 also features new XenConvert P2V and V2V conversion tools. Citrix is now introducing consumption-based pricing for C3, to meet the needs of service providers that charge their customers based on their metered resource use. Single developer copies of XenServer are free, but cost $900 each for multi-server licensing and up to $3000 for the Enterprise Edition with support for clustering and XenMotion. It is based on the earlier 3.2 version of the Xen engine.
With C3 comes the new XenServer Cloud Edition, a virtual infrastructure solution optimized for service provider environments and available as a stand-alone virtual infrastructure platform or as an upgrade pack to open source Xen implementations. Other Citrix Delivery products that comprise C3 are NetScaler for optimization of application performance in the cloud, WANScaler to bridge the enterprise to the cloud and Workflow Studio to provide a rich orchestration capability. Workflow Studio also makes it easy for customers to integrate Citrix solutions with third-party products to automate end-to-end application delivery processes.
Citrix Cloud Center (C3) is available immediately and includes XenServer Cloud Edition, Citrix NetScaler and Workflow Studio Tech Preview.
A summary of new features for XenServer 5 is available here:
http://static.slideshare.net/swf/ssplayer2.swf?doc=[...]presentation
In September, Sun Microsystems announced Sun xVM Server and Sun xVM Ops Center 2.0, key components in its comprehensive virtualization strategy. Additionally, Sun launched xVMserver.org, a new open source community, where developers can download the first source code bundle for Sun xVM Server software and contribute to the direction and development of the product at http://www.xvmserver.org. Additional pieces of the Sun xVM virtualization portfolio, including Sun xVM VirtualBox software and Sun xVM Ops Center, are open source and freely available for immediate download at http://www.openxvm.org.
With the release of Sun xVM Ops Center 2.0, Sun provides integrated management of virtual and physical environments. The new release adds virtual guest management to its existing ability to manage physical infrastructures, making it easier for users to manage thousands of geographically distributed systems simultaneously. Sun xVM Ops Center simplifies the discovery, provisioning, updating, monitoring, and reporting of physical and virtual assets, as well as compliance reporting via one unified browser-based interface.
With the new Sun xVM Server software, Sun delivers an easy-to-use, open source, datacenter-grade server virtualization solution to virtualize and manage heterogeneous workloads, including Windows, Red Hat and SUSE Linux, Solaris and OpenSolaris operating systems, on Sun x86 platforms and SPARC-based servers. Sun xVM Server software provides built-in management through a browser, enterprise-class scalability, reliability, and security. Sun xVM Server software is designed to inter-operate with VMware and uses the same virtual hard disk and virtual appliance formats, enabling customers to easily move workloads between VMware ESX and Sun xVM Server software.
Sun xVM Server is open source software but support subscriptions can be purchased in 4 socket increments at $500/yr. Sun xVM Infrastructure Enterprise costs $2000 in 4 socket increments for unlimited users.
xVM Ops Center can be integrated with system management tools from Dell, HP, and other system vendors, and can manage over 50,000 CPU cores.
Sun estimates that customers running Sun xVM Server software can expect up to 50% savings in their virtualization infrastructure costs when compared to offerings from leading competitors.
Sun Microsystems released a new version of Sun xVM VirtualBox, its high performance, free and open source desktop virtualization software, as well as Sun xVM VirtualBox Software Enterprise Subscription, a new offering for 24/7 premium support for enterprise users. xVM VirtualBox software is the first major open source hypervisor to support the most popular host operating systems, including Mac OS X, Linux, Windows, Solaris and OpenSolaris. To download the software and sign up for an enterprise support subscription, visit http://www.sun.com/software/products/virtualbox/get.jsp
Users of xVM VirtualBox 2.0 software will now benefit from new support for 64-bit versions of operating systems like Windows Vista and RedHat Enterprise Linux, in addition to all other major host operating systems. The software also offers a new user interface for the Mac platform, improved networking for the Mac OS X and Solaris OS, as well as improved performance, especially on AMD chips. Additionally, customers who purchase an enterprise subscription will also receive a Right-to-Use License, allowing them to deploy the xVM VirtualBox platform using their own software deployment tools. Subscriptions start at $30 (USD) per user per year, which includes 24/7 support, and discounts are available based on volume.
xVM VirtualBox software lets users create "virtual machines" into which they can install their operating system of choice, so developers can easily build, test and run cross-platform, multi-tier applications on a single laptop or desktop computer. Only a 20 megabyte download, xVM VirtualBox software installs in less than five minutes.
Sun's xVM VirtualBox software has received awards and positive reviews from top industry experts. Most recently, the software won InfoWorld's 2008 Bossies (Best of Open Source Software) Awards in the "Best of open source platforms and middleware: Desktop virtualization" category.
HP has new products and solutions to help customers simplify their virtualized environments. Announced at VMworld 2008, HP's new offerings provide support for VMware technologies in four key areas: management software, virtualization services, virtual desktop infrastructure (VDI), and server and storage infrastructures.
HP Data Protector software simplifies and centralizes automated data protection and recovery operations. This includes increased availability of critical applications with Zero Downtime Backup and Instant Recovery capabilities. HP Data Protector Zero Downtime Backup and Instant Recovery tightly integrate with HP StorageWorks Enterprise Virtual Arrays, giving customers zero-impact backup of mission-critical application data residing on virtual machines. This integration also provides the recovery of both the virtual machine and critical data in minutes instead of hours.
HP Insight Dynamics - VSE is an integrated solution to visualize, plan, and change physical and virtual resources in the same way, improving data center efficiency. Combined with VMware VirtualCenter, the solution provides high availability and simplified provisioning of resources across the data center. HP Insight Dynamics - VSE with VMware VirtualCenter can pre-emptively move virtual machines to a different hardware platform before any downtime occurs.
New HP VDI Services is a desktop replacement solution that provides security for data and applications on a desktop and lowers the cost of desktop life cycle management, while providing users with the experience of a standard desktop.
HP's thin client portfolio has been certified for VMware Virtual Desktop Manager, VMware's connection broker. This certification ensures HP customers of easy deployment of VMware VDI with any HP thin client.
HP is offering a product for automated disaster recovery for virtualized environments. HP and VMware have worked together to develop an integrated, simple and automated disaster recovery solution for virtual environments. This offering combines VMware Site Recovery Manager, HP StorageWorks Enterprise Virtual Arrays (EVA) and HP Continuous Access Replication EVA Software.
HP's support for VMware Site Recovery Manager provides customers of all sizes with reliable disaster recovery technology to protect their business-critical applications. The HP EVA's dual-redundant hardware architecture eliminates single points of failure.
Talkback: Discuss this article with The Answer Gang
Deividson was born in União da Vitória, PR, Brazil, on 14/04/1984. He became interested in computing when he was still a kid, and started to code when he was 12 years old. He is a graduate in Information Systems and is finishing his specialization in Networks and Web Development. He codes in several languages, including C/C++/C#, PHP, Visual Basic, Object Pascal and others.
Deividson works in Porto União's Town Hall as a Computer Technician, and specializes in Web and Desktop system development, and Database/Network Maintenance.
Howard Dyckoff is a long term IT professional with primary experience at
Fortune 100 and 200 firms. Before his IT career, he worked for Aviation
Week and Space Technology magazine and before that used to edit SkyCom, a
newsletter for astronomers and rocketeers. He hails from the Republic of
Brooklyn [and Polytechnic Institute] and now, after several trips to
Himalayan mountain tops, resides in the SF Bay Area with a large book
collection and several pet rocks.
Howard maintains the Technology-Events blog at
blogspot.com from which he contributes the Events listing for Linux
Gazette. Visit the blog to preview some of the next month's NewsBytes
Events.
On the surface, these are two very different conferences with two different kinds of attendees. Looking at two obvious differences, Better Software was again held in Vegas, a tradition it continues to uphold (praise Elvis!), while Google I/O, the first of a promised series of annual events, was held at the Moscone Center in San Francisco. The original Google Developer Day was held in nearby Santa Clara last year, and then repeated around the globe; since Google's Palo Alto HQ provides an anchor of sorts, Google I/O is likely to remain a Bay Area event.
Both occupy the late spring calendar, when IT budgets push up against the end of the corporate fiscal year and funds might be tightening; one occurs as Vegas temps break into the triple digits (and stay there) and the other as the summer fog layer is forming on SF Bay. Both, however, are aimed at the developer community and both look for those elusive "Best Practices".
The Better Software conference is organized in part by Better Software magazine, formerly Software Testing and Quality Engineering (or STQE) magazine. Better Software tries to treat its attendees like old friends returning home - so naturally some of them sport graying hair and are a bit pear-shaped. These are the tech leads, project managers, and software architects from the Web 1.0 revolution and some revolutions before that. There are also security, test, and Q/A engineers. Still, Google would be interested in hiring many of them, because these are very senior and experienced people who all have a good understanding of the arcane process of software development.
Better Software magazine teamed up with Agile methods trainer SQE (Software Quality Engineering) which also runs the StickyMinds.com Web site (an on-line resource for developers and project managers) to produce this annual event, as well as the annual Agile Development Practices conference. So these folks also have Agile religion and an Open Source bias, and that's really refreshing! They are about methodology, not commercial interests, so there are no "suits" (corduroys, maybe, but no wool suits...)
For the tech sessions, the conference planners correctly separated Agile Development from Agile Management. There were also tracks for Security, Testing and Q/A, as well as Requirements, Software Metrics, and Design and Architecture. The keynotes reflected the main interest of the attendees, which is effectively applying Agile Development Methods and Scrum to differing environments and conditions. The choir is already converted, so this event is really about sharing war stories and attaining mastery.
People have been swearing for years that Agile Methodology delivers software faster and with fewer defects. An SRO keynote by Michael Mah of QSM Associates went beyond colorful anecdotes and provided quantitative results of going Agile for different organization types. He also compared the effectiveness of XP, Scrum, pair programming, etc. The long and short of it... Agile will help almost all organizations.
That keynote was titled "The Good, the Better, and the Rather Puzzling: The Agile Experience at Five Companies". Mah talked about how old metrics can be reversed when Agile methods, applied successfully, turn the "laws of software physics" upside down. Those predictable relationships between schedule pressure, staff ramp-up, and bug rates could all be changed with Agile. His productivity findings at five mostly-Agile companies show a range of results for time-to-market, productivity, and quality. Each company did Agile its own way, and all the metrics show process gains. They also show that new Agile approaches are moving into the mainstream. And the teams that best understood and practised Agile methods seemed to show the most gains - roughly double the output of teams just beginning Agile.
(Michael Mah is an author with the Cutter Consortium, and managing partner of QSM Associates, Inc., specializing in software measurement and project estimation. Michael has collected data on thousands of projects worldwide; his book-in-progress,
During his omnibus presentation, Mah referred to the work and suggestions of Agilist Mike Cohn, who delivered the first day morning keynote:
"I'm sure we all know this from experience, but try arguing with management. One of Mike Cohn's slides mentioned Kent Beck and the subject of sustainable overtime for software development teams. Although pure Agilists will say "no overtime ever", as an experienced consultant Cohn quoted one of Beck's "rules" for Agile teams: never work two consecutive weeks of overtime.
"This actually meets my experience from pre-Agile days and early Agile experiments: you can manage to get a little recovery if you choose to push yourself or your team extra hard in a single week, but Cohn showed data on a slide illustrating how productivity levels fall when teams violate this one week rule. Productivity only rose in the first overtime week - then fell more and more for every other week that overtime was imposed. Well-documented burnout!"
Tutorials ran from Lean Agile Management to Software Security, with Test-Driven Development and Design Principles, Risk Management, and even a unit on Behavior-Driven Development: "Behavior-driven development (BDD) is a new evolution in Agile software delivery. With its roots in test-driven development, domain-driven design, and automated acceptance testing, BDD enables teams to deliver valuable software more quickly. It puts the tester at the heart of the delivery process rather than in the usual position at the end."
Two days of tutorials ran at the beginning of the conference; there were full-day and half-day sessions. Regular conference sessions spanned the same topics in 90-minute slots, so they were a bit more complete than at other conferences, though still only about half the length of a half-day tutorial; some sessions were essentially intros to related tutorials.
This link shows the archived Better Software 2008 conference site:
http://www.sqe.com/conferencearchive/bettersoftwareconf2008/Default.html
Unlike some more modern community software conferences (like Q-con, JavaOne, MySQL, OScon, etc.), the content at Better Software is, sadly, locked away. Some materials will appear as future articles at Better Software magazine, but the presentations are held at the StickyMinds.com Web site and access is based on conference attendance and/or the premium "Power Pass" that comes with a paid subscription to StickyMinds.com. The Power Pass grants access to all conference materials (for several conferences), magazine articles, and even search of on-line books, while the free registration offers more limited access.
The content at StickyMinds is great food for thought for software developers and software project managers, and even the free membership offers significant content. An example of this is a then-current feature story in the Better Software August issue, "How to Fail with Agile" (by Clinton Keith and Mike Cohn), which actually was the basis for the first conference keynote, "Agile and the Seven Deadly Sins of Project Management". That presentation also mapped traditional "sins" like lust to feature crazes, and sloth to ignoring code quality. Here is the direct link, rather than having to click through to the past August feature: http://www.stickyminds.com/BetterSoftware/magazine.asp?fn=cifea&id=111 (valid in Sept 08)
The article lists twenty (somewhat tongue-in-cheek) guidelines to help you sabotage your Agile project. E.g., Guideline 8: "Do not create cross-functional teams - put all the testers on one team, all the programmers on another, and so on." Guideline 15: "Drop and customize important Agile practices before fully understanding them." We can guess how that one turns out...
For more great content, see Scott Ambler's detailed article on Agile Model-Driven Development from the June issue archive. There are also articles by many of the conference contributors on subjects related to their presentations and tutorials. E.g., Johanna Rothman's "Does Exploratory Testing Have A Place On Agile Teams?". Exploratory testing - questioning and learning about the product as you design and execute tests rather than slavishly following predefined scripts - makes sense for many projects. Rothman also thinks it helps in Agile projects.
The Better Software conference was held at The Venetian Hotel and Conference Center in Las Vegas, NV. It will return there June 8-11, 2009. That may not sound like a familiar venue, but it's the old Sands Convention Center redecorated from the earlier Comdex days. While Better Software took place on the upper 2 floors, Symantec Vision took place on the lower 3 floors. It's a large and workable conference venue, but you have to like going to and staying in Vegas - in the hotter part of June. When I was there, Elvis had left the building.
The two day developer conference took place in late May and almost 3000 attended. This is breaking into the big leagues for Google, especially with the major SF venue for tech conferences.
First, this was a very low cost event. The nominal charge was $400 for 2 days, including about 90 sessions, 2 lunches, and an evening reception. That's hard to beat, especially when the content is decent.
Of course, the Google Developer Day in May 2007 had been completely free and actually had better food (it's true; there was also more of it). But 'free' is hard to maintain and this conference was much broader and covered many topics not specific to Google. Although there were some sessions similar to the ones at the recent Web 2.0 and JavaOne, the content was decent and of a level comparable to other technical conferences.
Google recorded everything but the lab sessions (which should have been at least audio recorded since the many verbal comments were also valuable). These were supposed to be up on the Google I/O Web site in a week or two (and I did find them when I checked 6 weeks later.) So that $400 is a bargain - not to mention that you may get chatted up by a Google employee for a job. Of course, there was some Google marketing and an emphasis on Google products and initiatives, but this really took a back seat to the tech content. There were also sessions on broader topics like social networking APIs. That's a thumbs up for this event.
Staying with the technical emphasis, most of the keynotes were interesting - and this goes double for the GA announcement of the recent Google App Engine. That was a good overview that enticed attendees to go to the deep dive sessions later.
Here's a hit list of main announcements:
Session slides and videos can be found at: http://sites.google.com/site/io/
Follow that link to see sessions like: "Building an Android Application 101" or "Meet the OpenSocial Containers" or "Underneath the Covers at Google: Current Systems and Future Directions" (a peek at the architecture of Google's data centers.)
Consider starting with "Open Source Projects and Poisonous People", which tries to identify people who may poison the atmosphere in a developer community, and shows how to defuse them. Also consider "Rapid Development with Python, Django, and Google App Engine" by Guido van Rossum (the creator of Python, who now works for Google).
And don't miss the sessions on Shindig such as: "Apache Shindig: Make your Social Site an OpenSocial Container." Shindig, a new Apache Software Foundation incubator project, is an open source implementation of both the OpenSocial specification and gadgets specification in Java and PHP. The goal of Shindig is to make it easier for social networking sites to extend functionality using the OpenSocial API.
Among the other sessions I'd recommend is "Painless Python - part 1 & 2" by Alex Martelli, who wrote
The audience for this session is primarily programmers with no Python experience, but even sysadmins and Perl hackers could get a decent grounding in Python. This is useful since Google App Engine and some other tools are Python-based.
For people getting started with Python, he recommended on-line resources like http://python.org and http://diveintopython.org as a good start. He also mentioned Usenet's news://comp.lang.python and help@python.org.
Some points about Python: it is a very high level language with a very deep connection to C. According to Martelli, "Python is 87% in the spirit of C..." since it can use both OOP and procedural programming and supports operator overloading and multiple inheritance.
Like Java, Python is compiled into byte code. Everything inherits from the Object class, but in Python, everything is a first class object.
In the Zen of Python, simple is better than complex, complex is better than complicated, and flat is better than nested. But, as Martelli emphasized, "There should be one, and preferably only one, obvious way to do it."
Python is strongly typed, but it is also dynamic. There are no declared types, no declarations, just statements. The standard Python library has over 200 modules, most purely Python, but some coded in C. Over 4000 additional modules can be found at: http://cheeseshop.python.org/pypi
Google App Engine was one of the highlights of the event. This is a Web-hosting service that lets 'customers' run applications on Google's infrastructure. For a small fee, your app can run in a Google datacenter. Apps are either run on your domain or on a free domain from http://appspot.com.
App Engine applications are implemented in Python. The runtime environment includes the full Python language and most of the Python standard library.
Google App Engine went live after the conference, which meant that over 100k developers and potential users that had been wait-listed during the restricted beta can now participate in the on-going beta. This time, Google lifted the use limits and also began to charge for serious use. When enrolling, all App Engine users get up to half a GB of storage and up to 5 million page views a month, free. That's probably enough for a small Web-based business. But success will have a price. The rates vary a bit but it's roughly 10 cents per CPU core hour and GB of bandwidth. Add 15-20 cents per GB per month of storage - about $2-3/year - and a serious Web enterprise might pay hundreds or thousands of dollars for 99.99% uptime. Does that make your data center into an expensive boat anchor?
Well, Google's reputation was tarnished by a major outage about a month after the conference but other Internet stars all had major and more frequent outages. It's still a bargain and significantly less expensive than Amazon's more mature cloud computing services. Also note that at present Google does not also charge "per-request" as Amazon does.
Google also announced and described new tools in the App Engine toolkit: a memcache utility for key-value caches comparable to memcache utilities in other scripting languages, and an API for simple image manipulation (to rotate, scale, and crop images). These are important because App Engine customers are not provisioning their own VMs with their own tools and utilities. App Engine is an automated application hosting environment. In contrast, Amazon offers separate computing resources like storage (S3), and virtual servers (EC2) which customers have to organize into Web applications piece by piece.
Find out more about App Engine here:
http://code.google.com/appengine/docs/whatisgoogleappengine.html
http://code.google.com/appengine/docs/python/
A demo video on using App Engine is available here: http://www.youtube.com/watch?v=bfgO-LXGpTM
App Engine is built in Python and uses the Django Web application
framework. Guido van Rossum described both in detail in a technical
session, "Rapid Development with Python, Django and Google App Engine":
(http://sites.google.com/site/io/rapid-development-with-python-django-and-google-app-engine)
App Engine works simply as an HTTP request handler; HTTP is like an RPC mechanism and supports Web applications and AJAX. Requests are handled serially or in parallel while App Engine handles app scaling and load balancing automatically.
Through Python APIs, App Engine provides for datastore, Google Accounts authentication, URL fetch and email services. Application code can only run in response to a Web request. A request handler cannot spawn a sub-process or execute code after the response has been sent.
App Engine apps must use App Engine's own DB facilities. This is a highly scalable data store based on Google's BigTable, which has the advantage that there is no need for traditional SQL table joins.
There is a Django helper to convert DB models to App Engine use. See: http://code.google.com/p/google-app-engine-django/
Customer applications are basically directories containing Python and Django files (i.e., '/myapp' runs from the myapp subdirectory). Symbolic links are followed. A single file controls the URL mapping and application metadata; request handlers are invoked by URL patterns like CGI scripts, and output goes to stdout. See the slides for file definitions and code samples.
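For readers who have not used App Engine, the control file mentioned here is app.yaml; at the time of writing, a minimal one looked roughly like the sketch below (the application id "myapp" and the handler script name are placeholders, not taken from the talk):

application: myapp
version: 1
runtime: python
api_version: 1

handlers:
- url: /.*
  script: myapp.py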
Similar tracks at Google I/O covered Android, Google Gears, OpenSocial, Mapping, JavaScript, and other technologies. This is a broad technical conference featuring many Web 2.0 building blocks.
Kudos also for placing the conference in downtown SF. That's a better location for out-of-state visitors and reasonably accessible to all Bay Area locals. The mid-week timing of the conference also made it a bit easier to fly in and out of San Francisco.
Besides good grub and bins of freely available snack food, participants got a Google I/O T-shirt and a simple cloth tote bag. Since with Google it's all net, there were no developer CDs, and the only hardware given out was the Google lapel pins also available at other tech conferences.
One minor fiasco occurred on opening day. Google decided to keep the I/O conference registration open right up to keynote hour. That led to huge lines in the hour before the opening keynote. I waited over an hour for a badge, and just as I trudged off late to the keynote, a Googler stood up on a registration desk and shouted that everybody could go in for the morning sessions - sans badges - and complete registration in the early afternoon. Then the escalators went into overload... At Google I/O, we didn't need no stinking badges!
Talkback: Discuss this article with The Answer Gang
Howard Dyckoff is a long term IT professional with primary experience at
Fortune 100 and 200 firms. Before his IT career, he worked for Aviation
Week and Space Technology magazine and before that used to edit SkyCom, a
newsletter for astronomers and rocketeers. He hails from the Republic of
Brooklyn [and Polytechnic Institute] and now, after several trips to
Himalayan mountain tops, resides in the SF Bay Area with a large book
collection and several pet rocks.
Howard maintains the Technology-Events blog at
blogspot.com from which he contributes the Events listing for Linux
Gazette. Visit the blog to preview some of the next month's NewsBytes
Events.
This article is meant to be a primer to the HugeTLB feature of the Linux kernel, which enables one to use virtual memory pages of large sizes. First, we will go through an introduction of large page support in the kernel, then we will see how to enable large pages and how to use large pages from the application. Finally, we will look into the internals of the large page support in the Linux kernel.
We will be using terms such as "huge pages", "large pages", "HugeTLB", etc. interchangeably in this article. This article covers large page support for x86 based architecture, although most of it is directly applicable to other architectures.
From a memory management perspective, physical memory is divided into "frames" and virtual memory is divided into "pages". The memory management unit performs a translation of virtual memory addresses to physical memory addresses. The information regarding which virtual memory page maps to which physical frame is kept in a data structure called the "page table". Page table lookups are costly. In order to avoid performance hits due to this lookup, a fast lookup cache called the Translation Lookaside Buffer (TLB) is maintained by most architectures. This cache contains virtual memory address to physical memory address mappings, so any virtual memory address which requires translation to a physical memory address is first looked up in the TLB. When a valid address translation is not present in the TLB, it is called a "TLB miss". If a TLB miss occurs, the memory management unit has to refer to the page tables to get the translation, which brings an additional performance cost; hence it is important that we try to reduce TLB misses.
On normal configurations of x86-based machines, the page size is 4K, but the hardware offers support for larger pages. For example, 32-bit x86 machines (Pentium and later) support 2MB and 4MB pages, and other architectures such as IA64 support multiple page sizes. In the past, Linux did not support large pages, but with the advent of the HugeTLB feature in the Linux kernel, applications can now benefit from them. Using large pages reduces TLB misses, because a single TLB entry then spans a larger memory area. Applications with heavy memory demands, such as databases and HPC applications, can potentially benefit from this.
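To put rough numbers on this (purely illustrative figures): covering a 1GB working set with 4K pages requires 1GB / 4K = 262,144 distinct translations, whereas with 4MB pages the same area needs only 256 - so a much larger fraction of the working set fits in the limited number of entries a TLB actually holds.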
Support for large pages is included in the Linux kernel by selecting CONFIG_HUGETLB_PAGE and CONFIG_HUGETLBFS during kernel configuration. On a machine with HugeTLB enabled in the kernel, information about the huge pages can be seen in /proc/meminfo. The following example was taken from an AMD Sempron laptop running kernel 2.6.20.7 with HugeTLB enabled; the information about large pages is contained in the entries starting with the string "Huge".
#cat /proc/meminfo | grep Huge
HugePages_Total:     0
HugePages_Free:      0
HugePages_Rsvd:      0
Hugepagesize:     4096 kB
We have to tell the kernel the number of large pages that need to be reserved for use, by echoing that number into the nr_hugepages proc entry. In the following example, we reserve a maximum of 4 large pages:
#echo 4 > /proc/sys/vm/nr_hugepages
The kernel will now have allocated the requested large pages (subject to the availability of memory). We can look at /proc/meminfo once again and confirm that the kernel has indeed allocated them:
#cat /proc/meminfo | grep Huge
HugePages_Total:     4
HugePages_Free:      4
HugePages_Rsvd:      0
Hugepagesize:     4096 kB
We can also enable HugeTLB pages by passing the "hugepages=" parameter on the kernel command line at boot, or by setting the number of large pages with 'sysctl'.
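For example (the value 4 here simply mirrors the echo shown earlier), booting with the kernel parameter "hugepages=4" pre-allocates four large pages at boot time, and the same reservation can be made at run time via sysctl:

#sysctl -w vm.nr_hugepages=4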
An application can make use of large pages in two ways: by using a special shared memory region, or by mmaping files from the hugetlb filesystem (hugetlbfs). If private HugeTLB mappings are needed, the hugetlbfs mmap technique is the recommended one; a minimal sketch of it appears right after the shared-memory walk-through below. In this article we will concentrate on large page support via shared memory, and see how an array mapped onto large pages can be used from an application.
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdlib.h>

#define MB_1 (1024*1024)
#define MB_8 (8*MB_1)

char *a;
int shmid1;

void init_hugetlb_seg()
{
	shmid1 = shmget(2, MB_8, SHM_HUGETLB | IPC_CREAT | SHM_R | SHM_W);
	if ( shmid1 < 0 ) {
		perror("shmget");
		exit(1);
	}
	printf("HugeTLB shmid: 0x%x\n", shmid1);
	a = shmat(shmid1, 0, 0);
	if (a == (char *)-1) {
		perror("Shared memory attach failure");
		shmctl(shmid1, IPC_RMID, NULL);
		exit(2);
	}
}

void wr_to_array()
{
	int i;
	for( i=0 ; i<MB_8 ; i++) {
		a[i] = 'A';
	}
}

void rd_from_array()
{
	int i, count = 0;
	for( i=0 ; i<MB_8 ; i++)
		if (a[i] == 'A')
			count++;
	if (count==i)
		printf("HugeTLB read success :-)\n");
	else
		printf("HugeTLB read failed :-(\n");
}

int main(int argc, char *argv[])
{
	init_hugetlb_seg();
	printf("HugeTLB memory segment initialized !\n");
	printf("Press any key to write to memory area\n");
	getchar();
	wr_to_array();
	printf("Press any key to rd from memory area\n");
	getchar();
	rd_from_array();
	shmctl(shmid1, IPC_RMID, NULL);
	return 0;
}
The above program is just like any other program which uses shared memory. First, we initialize the shared memory segment with an additional flag SHM_HUGETLB for getting large page-based shared memory. Then we attach the shared memory segment to the program. Following this, we write to the shared memory area in the function call 'wr_to_array'. And finally we verify whether the data has been written properly by reading back the data in the function 'rd_from_array'.
Now let us compile the program and run it.
#cc hugetlb-array.c -o hugetlb-array -Wall
#./hugetlb-array
HugeTLB shmid: 0x40000
HugeTLB memory segment initialized !
Press any key to write to memory area
At this point, if we check the status of the HugeTLB pages in /proc/meminfo, it will show that 2 pages, i.e. 8MB of memory, are reserved. All the large pages are still shown as free, since we have not yet touched the memory area.
#cat /proc/meminfo | grep Huge
HugePages_Total:     4
HugePages_Free:      4
HugePages_Rsvd:      2
Hugepagesize:     4096 kB
Press a key at the program prompt; this writes to the allocated HugeTLB memory area. Now the memory segment which was allocated is actually used, which moves the 2 large pages to the allocated state. We can see this in /proc/meminfo, where HugePages_Free now shows only 2:
#cat /proc/meminfo | grep Huge
HugePages_Total:     4
HugePages_Free:      2
HugePages_Rsvd:      0
Hugepagesize:     4096 kB
The following message will now appear:
Press any key to rd from memory area
Finally, when we press a key at the program prompt, the program checks whether the data that was written is indeed present in the HugeTLB area. If everything goes fine, we get a HugeTLB smiley:
HugeTLB read success :-)
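As mentioned earlier, the other way to obtain large pages is to mmap a file that lives on a hugetlbfs mount. The following is only a minimal sketch of that approach, not part of the shared-memory example above; it assumes hugetlbfs has been mounted by hand at /mnt/huge (e.g. with "mount -t hugetlbfs none /mnt/huge"), a path chosen purely for this illustration:

/* hugetlb-mmap.c: minimal sketch of the hugetlbfs mmap approach (illustrative only) */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define MB_8 (8 * 1024 * 1024)

int main(void)
{
	int fd;
	char *p;

	/* /mnt/huge is an assumed hugetlbfs mount point */
	fd = open("/mnt/huge/example", O_CREAT | O_RDWR, 0600);
	if (fd < 0) {
		perror("open");
		exit(1);
	}
	/* The mapping length should be a multiple of the large page
	   size shown as Hugepagesize in /proc/meminfo. */
	p = mmap(NULL, MB_8, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		exit(2);
	}
	p[0] = 'A';	/* the first touch faults in a large page */
	printf("First byte read back: %c\n", p[0]);
	munmap(p, MB_8);
	close(fd);
	unlink("/mnt/huge/example");
	return 0;
}

Removing the file releases the large pages back to the global pool.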
Inside the Linux kernel, large page support is implemented in two parts. The first part is a global pool of large pages, allocated and kept reserved for providing large page support to applications; this pool is built by allocating physically contiguous regions of large page size using normal kernel memory allocation APIs. The second part is the kernel allocating large pages from this pool to applications that request them.
We will first see the internals of how the large pages are initialized and how the global pool is filled up. Then we will see how shared memory can be used by an application to leverage large pages, and how the physical pages actually get allocated at page fault time. We will not perform a line-by-line code walkthrough; instead, we will go through the main parts of the code relevant to large pages.
In the Linux kernel source code (file mm/hugetlb.c), the function "hugetlb_init" allocates multiple physically contiguous pages of normal page size to form clusters that can be used as large pages. The number of pages allocated this way depends on the value of the "max_huge_pages" variable, which can be set on the kernel command line with the 'hugepages' parameter. The large page size depends on the macro HUGETLB_PAGE_ORDER, which in turn depends on the HPAGE_SHIFT macro. On x86 (when PAE is not enabled), this macro is assigned the value 22, which means the large pages allocated will be 4MB in size. Note that the large page size depends on the architecture and the page sizes it supports.
The pages allocated as described above are enqueued into the "hugepage_freelists" of the node they were allocated from, by the function 'enqueue_huge_page'; each memory node (in the NUMA case) has its own freelist. When large pages are allocated dynamically, as in the earlier example (by echoing a value into /proc) or by other dynamic methods, a similar sequence of events occurs as during the static allocation of large pages.
In order to use a shared memory area, we first have to create it. This, as we have seen, is done by the 'shmget' system call, which invokes the kernel function 'sys_shmget', which in turn calls 'newseg'. In 'newseg' a check is made to see whether the user has asked for the creation of a HugeTLB shared memory area. If the user specified the large page flag SHM_HUGETLB, the file operations for the corresponding file structure are set to 'hugetlbfs_file_operations'. The large pages get reserved by the function 'hugetlb_reserve_pages', which increments the reserved page count, resv_huge_pages, shown as 'HugePages_Rsvd' in /proc/meminfo.
When the 'sys_shmat' system call is made, address alignment and other sanity checks are done using the 'hugetlb_get_unmapped_area' function.
When a page fault occurs, the "vma" that corresponds to the faulting address is found. A vma corresponding to a hugetlb shared memory area has 'VM_HUGETLB' set in 'vma->vm_flags', which is detected by calling 'is_vm_hugetlb_page'. When a hugetlb vma is found, the 'hugetlb_fault' function is called. It sets up the large page flag in the page directory entry and then allocates a huge page, based on copy-on-write logic, from the global pool of large pages initialized previously. The large page size itself is indicated to the hardware by setting the _PAGE_PSE flag in the pgd (the 7th bit, counting from bit 0, on x86 without PAE).
Detailed documentation with advanced examples can be found in the file Documentation/vm/hugetlbpage.txt which comes with Linux kernel source code.
The HugeTLB feature in the kernel is not application-transparent: we need to explicitly modify the application (i.e. insert code that uses shared memory or hugetlbfs) to make use of large pages. For those interested in application-transparent implementations of large page support, an Internet search for "Transparent superpages" will lead to Web sites containing details of such implementations.
We have seen how the Linux kernel provides applications with the ability to use large pages. We went through methods to enable and use large pages. After that we skimmed through the internals of the HugeTLB implementation inside the kernel.
I would like to extend my sincere thanks to Kenneth Chen for giving me better insights into HugeTLB code, for answering my questions with patience and for the review of an initial draft of this article. I would also like to thank Pramode Sir, Badri, Malay, Shijesta and Chitkala for review and feedback.
Talkback: Discuss this article with The Answer Gang
Krishnakumar loves to play around with the linux kernel source. He
works for the Scalable Computing & Infrastructure organization of
Hewlett-Packard. His research interests include High Performance
Computing(HPC), Parallel Filesystems, Virtualization, and High
Performance Interconnects. He received a B.Tech in Computer Science
and Engineering from Govt. Engg. College Thrissur.
By Rick Moen
BKUOSFSL.RVW 20081029
%A   Andrew M. St. Laurent
%D   August 2004
%G   0-596-00581-4
%I   O'Reilly Media
%O   "Understanding Open Source and Free Software Licensing"
%P   224 pages
%T   Understanding Open Source and Free Software Licensing
Andrew St. Laurent's 2004 volume is an attempt to explain open source and free software licensing to the lay reader.
There's a vital need for such books because businesses have been dipping their toes into open source for the past decade and immediately stumbling over legal issues, both real and imaginary. As a longtime observer of the resulting fray, I've been longing for a good book on the subject.
St. Laurent's comes within spitting distance of being that book. As an attorney with an interest in intellectual property law, he is able to give a lucid run-through of about a dozen of the most common open source licences, explaining what each clause means in detail, and contrasting them with a typical proprietary-software licence, similarly scrutinised. That part, which comprises the book's middle 70%, will be useful for people wondering how licences get their force and how they operate within our legal framework.

The book falls down in a couple of places. First and foremost, it lacks a coherent conceptual overview, failing to clarify the default rights conveyed by unaided copyright law and its provisions. It takes for granted crucial concepts such as "derivative work": if work A is derivative of work B, then the second work's owner's rights (and licensing) determine what can be done with (encumber) the derivative. The advantages of registering copyrights (and the consequences of not doing so), revocation of licences by the licensor, collective works versus joint works, and many other details important to licensing get no coverage at all. St. Laurent also never mentions a key aspect of licences: that the copyright holder attaches them to instances of a codebase, such that different instances may bear completely different terms of usage.
Some of the very common licensing controversies within the open source community aren't addressed, either: Is it necessary or desirable to require a licensee to indicate assent, e.g., through a clickwrap agreement mechanism? (St. Laurent states without reservation that clickwrap licences have been ruled enforceable, but the judicial record on the matter is actually mixed.) How extensive should the reach of licences' patent-defence clauses (if any) be? (Some licences revoke the rights of users who bring any sort of patent action, regardless of the dispute; others are limited in scope to just patent actions concerning the licensed work.) Is it possible to donate a work of original ownership directly to the public domain, despite the lack of any legal mechanism for doing so? Is it desirable to have a choice-of-law provision in one's licence? Can you, as the primary maintainer and copyright holder of a collective-work project, "upgrade" the project's licence to a better one? These are important questions, with which the book simply won't help you.
I was disappointed that St. Laurent takes for granted that the GNU General Public Licence and Lesser General Public Licence can be evaluated only as contracts; their author, law professor Eben Moglen, has clarified many times that both licences are intended as rights grants under copyright law only, and the licences themselves clearly so state. Thus, the question of their enforceability doesn't hinge on contract formation -- but even there, St. Laurent's coverage is lacking: given that GPL and LGPL would be unilateral contracts without any necessary obligation of payment by the licensee, how would the necessary contract element of "valuable consideration" be found? (Each side in a contract must give up something of value; otherwise, there can be no contract.) A more-thorough treatment would have discussed that issue and also the licences' intended status as "bare copyright licences".
These flaws notwithstanding, the book does include excellent, reasonably readable yet in-depth analysis of all primary open source licences in use today, and I do strongly recommend it to all interested audiences.
Talkback: Discuss this article with The Answer Gang
Rick has run freely-redistributable Unixen since 1992, having been roped
in by first 386BSD, then Linux. Having found that either one
sucked less, he blew
away his last non-Unix box (OS/2 Warp) in 1996. He specialises in clue
acquisition and delivery (documentation & training), system
administration, security, WAN/LAN design and administration, and
support. He helped plan the LINC Expo (which evolved into the first
LinuxWorld Conference and Expo, in San Jose), Windows Refund Day, and
several other rabble-rousing Linux community events in the San Francisco
Bay Area. He's written and edited for IDG/LinuxWorld, SSC, and the
USENIX Association; and spoken at LinuxWorld Conference and Expo and
numerous user groups.
His first computer was his dad's slide rule, followed by visitor access
to a card-walloping IBM mainframe at Stanford (1969). A glutton for
punishment, he then moved on (during high school, 1970s) to early HP
timeshared systems, People's Computer Company's PDP8s, and various
of those they'll-never-fly-Orville microcomputers at the storied
Homebrew Computer Club -- then more Big Blue computing horrors at
college alleviated by bits of primeval BSD during UC Berkeley summer
sessions, and so on. He's thus better qualified than most, to know just
how much better off we are now.
When not playing Silicon Valley dot-com roulette, he enjoys
long-distance bicycling, helping run science fiction conventions, and
concentrating on becoming an uncarved block.
By Joey Prestia
In a Windows networked environment, you can go to any machine, log in to the system, and have access to your home directories and files on the server. This seems like a convenience until you realize that in doing this, the server and the client machines are eating up resources by keeping all network shares hard-mounted at all times. The Linux solution to this is the autofs daemon known as the automounter.
In the past, Linux networks would have various types of Network File System (NFS) servers being used in different ways: some for network backup, some exporting users' Network Information Service (NIS) mapped directories, and some exporting other network resources. One problem was that booting a system that used traditional fstab-based mounts while the NFS server was down could take a long time, as the client waited for the server to time out. Similarly, when two servers each mount directory hierarchies from the other and both systems are down, both may hang as they are brought up (this is called server-server dependency). The automount facility gets around these issues by mounting a directory hierarchy from another system only when a process tries to access it. [1] If a client attempts a hard mount and the server is unavailable, the Linux kernel treats the process as it would if a local drive's file system were unavailable: the user can do nothing until the server becomes available, and the process sits in an uninterruptible sleep until the server is back on line. Often, the process can't even be killed, and a "ps aux" command will reveal a 'D' state.
[ For anyone not familiar with "ps" output, 'D' means "Uninterruptible sleep (usually IO)". For more information, please see the "PROCESS STATE CODES" section in "man ps". -- Ben ]
Automounting is the process whereby the mounting and unmounting of file systems is done automatically by the autofs service. If a filesystem is unmounted and a user attempts to access it, it will be automatically mounted or re-mounted. Using the automounter saves resources by mounting a directory only when you need to enter it, then unmounting it automatically when you don't need it anymore; the unmount happens after a timeout (300 seconds by default) that is specified in /etc/sysconfig/autofs, the main configuration file for this service. The other configuration files are located in the /etc directory and are called auto.master, auto.misc, auto.net, and auto.smb; the last two are executable scripts.
Let's say we have some NFS directories, and our company's employees are trained only in the use of the GNOME desktop environment; they could still use the automounter if you set up the configuration files and copied them to their machines. They would be able to point and click their way around, and the remote file systems would do the right thing, invisibly. In other words, using the automounter is not restricted to the console.
Procedure:
Let's check out the configuration files. (Note: this guide was written using RHEL 5.1, so other distributions may differ slightly in some aspects.) The main configuration file, "/etc/sysconfig/autofs", has the timeout for mounts defined as 300 seconds (or five minutes) of inactivity, and the default browse mode set to "no"; this prevents shares that are not being directly referenced from being automounted. There are lots of other configurable options for different setups, so it is worth your time to read through this file just so you know what is in there. Look particularly closely at any uncommented directives.
DEFAULT_TIMEOUT=300
DEFAULT_BROWSE_MODE="no"
The next file, 'auto.master', contains three uncommented lines:
/misc   /etc/auto.misc
/net    -hosts
+auto.master
The first line means that the configured devices in auto.misc will be mounted in the "/misc" directory; the second one means that the exports from the network hosts will be mounted in "/net". '+auto.master' is a reference to an external NIS or NIS+ master map. Entries in this file consist of a mount point followed by the path to a file which defines the details and the options of what will be mounted where. You can call the files anything you want, or whatever suits the need. The following entries would be valid:
/remote       /etc/auto.somewhere_else
/mnt/nas      /etc/auto.nas
/mnt/backup   /etc/auto.backup
We would then create the file that we referenced in auto.master and define the mount details in it.
It is also useful to consider mounting the users' remote home directories in a non-standard location (e.g., "/rhome/username" or "/export/home/username"). This would be considered a violation of the FHS (Filesystem Hierarchy Standard), since the mount point is located in the root of the filesystem; on the other hand, it's only a temporary directory, created and destroyed by the autofs daemon as necessary. The advantages are obvious: you have access to the content of your local "/home/username" directory as well as the remote one, and you keep the /mnt mount point free for other devices or shares.
These are the files that you create for your custom automounts. Each such auto.* file has three columns:
key mount options location or device
The "key" is the subdirectory under the mount point that you listed in the auto.master file. As an example, if in our auto.master we had the following:
/misc         /etc/auto.misc
/net          -hosts
/mnt/remote   /etc/auto.remote
+auto.master
We would then create, for example, the /etc/auto.remote file with our desired options:
# key      mount options     location or device
records    -rw,soft,intr     server1.example.com:/srv/nfs/medical/records
Once we restart the autofs service, changing to the /mnt/remote/records directory will create the demand for the exported resource server1.example.com:/srv/nfs/medical/records to be mounted on /mnt/remote/records. It is important to note that if you use host names (rather than IPs) and don't have DNS available, you will need an entry in /etc/hosts that maps the hostname to the IP address.
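For example, assuming server1.example.com lives at 192.168.0.254 (the address used in the examples further down), a single /etc/hosts line would suffice:

192.168.0.254    server1.example.com    server1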
The following example uses wildcard substitution. Placing an asterisk in the key field and an ampersand at the end of the location or device path maps the directory structure and files found at the remote location onto the matching key. In the example below, the entire remote home directory structure located on server1.example.com:/home would be recreated when the user attempts to log in. By using this technique, a user could log into any machine and have their home directory mapped appropriately.
# key      mount options     location or device
*          -rw,soft,intr     server1.example.com:/home/&
The /etc/auto.misc file contains a number of sample configurations for automounting various types of devices; to use one of these, merely uncomment the relevant line in this file and adjust the device accordingly. But just as different machines have different hardware configurations, different distros may have different setups, so a little customization will probably be necessary.
# This is an automounter map and it has the following format
# key [ -mount-options-separated-by-comma ] location
# Details may be found in the autofs(5) manpage
cd          -fstype=iso9660,ro,nosuid,nodev :/dev/cdrom

# the following entries are samples to pique your imagination
#linux      -ro,soft,intr       ftp.example.org:/pub/linux
#boot       -fstype=ext2        :/dev/hda1
#floppy     -fstype=auto        :/dev/fd0
#floppy     -fstype=ext2        :/dev/fd0
#e2floppy   -fstype=ext2        :/dev/fd0
#jaz        -fstype=ext2        :/dev/sdc1
#removable  -fstype=ext2        :/dev/hdd
Most distros now automatically mount USB devices under the /media directory and several of the devices listed above are fading away - but there is still some use for this file. It can be configured in a multitude of ways, e.g. to automount partitions that you don't need to have mounted at all times (backup devices are just one example). A good way to get familiar with this file is to come up with a configuration scenario and put it into practice. For example, try mounting a USB flash drive with the automounter and setting up different configurations until you get familiar with it. Remember to always back up the original file before making any changes.
Suppose we want to be able to automount the NFS share on the network server (instead of, say, manually typing "mount -t nfs 192.168.0.254:/var/ftp/pub/directory/subdirectory /mnt"). In this case, we would just need the hostname of the machine with the exported filesystem (if there is an entry in DNS) or the IP address of the machine. The "showmount -e hostname" or "showmount -e IP_address" command can be used to discover what directories a host has exported by NFS. Example: "showmount -e 192.168.0.254" will display the exports for that host.
The auto.net file is an executable script which searches for available NFS shares to mount; the NFS service must be running for the auto.net script to be able to connect to remote shares. To invoke this ability, you need to ensure that the autofs and NFS services are running:
[root@station17 ~]# service autofs status
automount is stopped
[root@station17 ~]# service nfs status
rpc.mountd is stopped
nfsd is stopped
rpc.rquotad is stopped
Since they are stopped, we start the services with the "service" command, and ensure that they stay on through a reboot by using the "chkconfig" command (on Red Hat based systems).
[root@station17 ~]# service autofs start
Starting automount:                                        [  OK  ]
[root@station17 ~]# chkconfig --level 35 autofs on
[root@station17 ~]# service nfs start
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
[root@station17 ~]# chkconfig --level 35 nfs on
[root@station17 ~]#
The "chkconfig" arguments just ensure that this service will remain on through a reboot in runlevels 3 and 5. Now we can see if any network server exports are available. First, we'll go to /net; assuming that our server with the NFS export is "server1" and has a DNS entry, we could execute the following:
[root@station17 /]# cd /net
[root@station17 net]# cd server1
[root@station17 server1]# ls
data  documents  powervault  rhome  var
[root@station17 server1]#
or, if we know the IP address of the machine, this would work:
[root@station17 net]# cd 192.168.0.254
[root@station17 192.168.0.254]# ls
data  documents  powervault  remote  var
[root@station17 192.168.0.254]#
The directory magically appears - but if we didn't know the machine's hostname or IP address, we would not be able to find it. The automounter does not list directories unless they have been accessed, so unless a user knows what to access they will not be able to blindly find it. This sure makes mounting NFS shares a lot easier - and you don't have to remember to manually unmount them!
As an exercise to see this work on our machine, let's execute the following commands.
[root@station17 ~]# echo "/var/www/html 127.0.0.1(sync,rw)" >> /etc/exports
[root@station17 ~]# service nfs restart
Shutting down NFS mountd:                                  [  OK  ]
Shutting down NFS daemon:                                  [  OK  ]
Shutting down NFS quotas:                                  [  OK  ]
Shutting down NFS services:                                [  OK  ]
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
[root@station17 ~]# cd /net/localhost
[root@station17 localhost]# ls
var
[root@station17 localhost]#
We've exported our /var/www/html directory as a read/write NFS export to our loopback IP address just to try this (in case you're not in a networked environment). Don't forget to remove the line in /etc/exports that we echoed into the file when you are done checking out the network mounting capabilities. Do note that typing "mount" at the command line will not show what is mounted by the automounter: you will have to execute a "cat /proc/mounts" to see what is currently mounted by the autofs daemon.
The default auto.smb file is also an executable script, and the use of it may require the Samba servername/username/password, or that a credentials file be specified for your particular setup. You can also use the guide included with your Samba documentation, as I did, to create your own auto.smb file; it follows the same format as the auto.misc file. If you want to automount Samba shares, you may want to just go ahead and create your own configuration file customized for your own needs. Just remember to back up the original file if you're not going to use the executable.
# automount points below /smb
# This is an automounter map and it has the following format
# key [ -mount-options-separated-by-comma ] location
# Details may be found in the autofs(5) manpage
# smb-servers
supra_andreas   -fstype=smb,username=andreas,password=foo   ://supra/aheinrich
supra_cspiel    -fstype=smb,username=cspiel                 ://supra/cspiel
phonon_andreas  -fstype=smb,username=andreas                ://phonon/andreas
helium_cspiel   -fstype=smb,username=cspiel                 ://helium/cspiel
The automounter will work with NFS exports, Samba, NIS, NIS+, LDAP, and various other devices, not to mention being scriptable and very configurable. In this article, I've just touched on the basics very briefly; don't forget to read the autofs(5) man page, since it has some interesting concepts. Also, be sure to consult it when setting up automounting, as well as any other distro-specific documentation when configuring the automounter to meet your needs.
[1] Taken from Mark G. Sobell's book "A Practical Guide to Red Hat Linux" Third Edition, page 691
Talkback: Discuss this article with The Answer Gang
Joey was born in Phoenix and started programming at the age of fourteen on a Timex Sinclair 1000. He was driven by hopes he might be able to do something with this early model computer, and soon became proficient in the BASIC and Assembly programming languages. Joey became a programmer in 1990 and added COBOL, Fortran, and Pascal to his repertoire of programming languages. Since then, he has become obsessed with just about every aspect of computer science. He became enlightened and discovered RedHat Linux in 2002 when someone gave him RedHat version six. This started off a new passion centered around Linux. Currently, Joey is completing his degree in Linux Networking and working on campus for the college's RedHat Academy in Arizona. He is also on the staff of the Linux Gazette as the Mirror Coordinator.
These images are scaled down to minimize horizontal scrolling.
Flash problems? All HelpDex cartoons are at Shane's web site,
www.shanecollinge.com.
Talkback: Discuss this article with The Answer Gang
Part computer programmer, part cartoonist, part Mars Bar. At night, he runs
around in his brightly-coloured underwear fighting criminals. During the
day... well, he just runs around in his brightly-coloured underwear. He
eats when he's hungry and sleeps when he's sleepy.
The Ecol comic strip is written for escomposlinux.org (ECOL), the web site that supports es.comp.os.linux, the Spanish USENET newsgroup for Linux. The strips are drawn in Spanish and then translated to English by the author.
These images are scaled down to minimize horizontal scrolling.
All Ecol cartoons are at tira.escomposlinux.org (Spanish), comic.escomposlinux.org (English) and http://tira.puntbarra.com/ (Catalan). The Catalan version is translated by the people who run the site; only a few episodes are currently available. These cartoons are copyright Javier Malonda. They may be copied, linked or distributed by any means. However, you may not distribute modifications. If you link to a cartoon, please notify Javier, who would appreciate hearing from you.
Talkback: Discuss this article with The Answer Gang
More XKCD cartoons can be found here.
Talkback: Discuss this article with The Answer Gang
I'm just this guy, you know? I'm a CNU graduate with a degree in physics. Before starting xkcd, I worked on robots at NASA's Langley Research Center in Virginia. As of June 2007 I live in Massachusetts. In my spare time I climb things, open strange doors, and go to goth clubs dressed as a frat guy so I can stand around and look terribly uncomfortable. At frat parties I do the same thing, but the other way around.
Jimmy O'Regan [joregan at gmail.com]
Knowing there are at least a couple of Heinlein readers here, some of you might be interested in this Slashdot story: http://entertainment.slashdot.org/article.pl?sid=08/09/10/0216202&from=rss (pointing to this: http://kk.org/ct2/2008/09/heinleins-fan-mail-solution.php)
"In the days before the internet, Heinlein's solution was to create a list of frequently asked questions, answer them, and remove the questions. Then he, or rather his wife Ginny, checked off the appropriate answer(s) and mailed it back. Some of the entries in Heinlein's answer sheet are quite illuminating and amusing. Our personal favorite: 'You say that you have enjoyed my stories for years. Why did you wait until you disliked one story before writing to me?'"
[ Thread continues here (11 messages/15.46kB) ]
Rick Moen [rick at linuxmafia.com]
[Jimmy was kind enough to mail me, out of the blue, commentary on an essay I wrote recently, http://linuxmafia.com/faq/Essays/marriage.html .]
Quoting Jimmy O'Regan (joregan@gmail.com):
> 2008/9/1 Rick Moen <rick@linuxmafia.com>:
> > Quoting Jimmy O'Regan (joregan@gmail.com):
> >
> >> I just read your fascinating essay 'Kudzu and the California Marriage
> >> Amendment'; one niggle, however: 'Stanlislawa Walasiewiczowna' should
> >> be spelled 'Stanislawa Walasiewiczowna' (or, with full Polish
> >> diacritics, 'Stanisława Walasiewiczówna')
> >
> > Thank you, sir. I was copying a third-party source, and it didn't look
> > quite right, but I forgot I had a Polish-speaker I could check with. ;->
> >
>
> I actually found your essay while trying to take a break from Polish -
> I'm currently wading my way through 'Pan Tadeusz', first in English,
> then in Polish, proofreading the scans as I go to possibly send to
> Project Gutenberg. It thus came as something of a minor shock to find
> yet more Polish in your essay - and I was already in 'error checking
> mode', so...
It's good to hear that the tragic historical trainwreck caused first by the "liberum veto", and then by Poland being in harm's way and attempting to be a liberal parliamentary democracy midway between three of the most powerful of Europe's autocratic dynasties (Catherine the Great, Frederick II and Frederick William II of the Hohenzollerns, and several Austrian emperors) produced something good.
Anyway, one of the reasons I hastened to finish that essay is that my family is going on holiday, starting tomorrow, on a cruise ship. We're going to be visiting London, Copenhagen, Berlin, Tallinn, St. Petersburg, Helsinki, Stockholm, Oslo, and London again, and arriving back on Sept. 19th.
My apologies for not having had time to help copyedit the September _Linux Gazette_: I've been busy getting ready to leave.
[ Thread continues here (10 messages/16.96kB) ]