Grand Designs

Posted on 31st December 2013

Over the last year I've made several releases of Labyrinth and its various plugins. Some have been minor improvements, while others have been major reworkings as I've reviewed the code for various projects. I originally wrote Labyrinth after being made redundant back in December 2002, and after realising all the mistakes I'd made with the design of its predecessor, Mephisto. In the last 11 years it has helped me secure jobs, enabled me to implement numerous Open Source projects (CPAN Testers and the YAPC Conference Surveys to name just two) and provided the foundation to create several websites for friends and family. It has been a great project to work on, as I've learnt a lot about Perl, AJAX/JSON, payment APIs, security, Selenium and many other aspects of web development.

I gave a talk about Labyrinth in Frankfurt at YAPC::Europe 2011, and one question I was asked was how Labyrinth compares to Catalyst. When I created Labyrinth, Catalyst and its predecessor Maypole were still two years (and one year respectively) away from release. Back then I had no idea about MVC, but I was pleased in later years, when I was introduced to the design concept, that it had seemed an obvious and natural way to design a web framework. Aside from that, and both being written in Perl, Labyrinth and Catalyst are very different beasts. If you're looking for a web framework to design a major system for your company, then Catalyst is perhaps the better choice. Catalyst also has a much bigger community, whereas Labyrinth is essentially just me. I'd love for Labyrinth to get more usage and exposure, but for the time being I'm quite comfortable with it being the quiet machine behind CPAN Testers, the YAPC Surveys, and all the other commercial and non-commercial sites I've worked on over the years.

This year I finally released the code that enables Labyrinth to run under PSGI and Plack. It was much easier than I expected, and helped me better understand the concepts behind the PSGI protocol. There are several other emerging concepts in web development, and I'm hoping Labyrinth will be a way for me to learn some of them. However, I suspect most of my major work with Labyrinth in 2014 is going to be centred on some of the projects I'm currently involved with.
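
For anyone unfamiliar with it, the PSGI interface itself is tiny, which is a large part of why the port was easier than expected. The sketch below is a minimal stand-alone PSGI app, not Labyrinth's actual adapter: an application is just a code reference that receives the request environment and returns the status, headers and body.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # A PSGI application is a code reference: it takes the environment
    # hash and returns a three-element array reference of status code,
    # headers and body. Wrapping an existing request handler in this
    # interface is what makes a framework runnable under Plack.
    my $app = sub {
        my $env  = shift;
        my $path = $env->{PATH_INFO} || '/';
        return [
            200,
            [ 'Content-Type' => 'text/html' ],
            [ "<p>You requested $path</p>" ],
        ];
    };

    $app;   # save as app.psgi and run with: plackup app.psgi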

The first is the CPAN Testers Admin site. This has been a long time coming, and is very close to release. Some backend fixes are still needed to join the different sites together, but the site itself is mostly done. It still needs testing, but it'll be another Labyrinth site to join the other four in the CPAN Testers family. The site has taken a long time to develop, not least because of the various other changes to CPAN Testers over the past few years, and the focus on getting reports online sooner rather than later.

The next major Labyrinth project I plan to work on during 2014 is the YAPC Conference Surveys. Firstly, I want to release the current code base and language packs, to enable others to develop their own survey sites; that has been long overdue. Secondly, I want to integrate the YAPC surveys into the Act software, so that promoting surveys for YAPCs and Perl Workshops will be much easier, and we won't have to rely on people remembering their keycode login. Many people have told me after various events that they never received the email to log in to the surveys. Some of these emails were later found in spam folders, but in other cases the attendee had changed their email address and the one stored in Act was no longer valid. Allowing Act to request survey links will enable attendees to simply log into the conference site and click a link. Further to this, if a conference has surveys enabled, I'd like the Act site to be able to provide links next to each talk, so that talk evaluations can be done much more easily.

Lastly, I want to get as much of the raw data online as possible. I still have the archives of all the surveys that have been undertaken, and some time ago I wrote a script to create a data file combining both the survey questions and the responses, appropriately anonymised, with related questions linked, so that others can evaluate the results and provide even more statistical analysis than I currently provide.
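
As a rough illustration of the anonymising step (the field names and hashing scheme here are my own placeholders, not the actual script): replacing each respondent identifier with a one-way hash keeps answers from the same person linked across questions without revealing who they are.

    use strict;
    use warnings;
    use Digest::MD5 qw(md5_hex);
    use JSON;

    # Replace respondent identifiers with stable, irreversible hashes,
    # so responses can be correlated but not traced back to a person.
    sub anonymise_responses {
        my ($secret, @responses) = @_;
        return map { {
            respondent => md5_hex($secret . $_->{user_id}),
            question   => $_->{question_id},
            answer     => $_->{answer},
        } } @responses;
    }

    my @out = anonymise_responses('site-secret',
        { user_id => 42, question_id => 'Q1', answer => 'Excellent' });
    print encode_json(\@out), "\n";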

In the meantime, the next notable release of Labyrinth will be a redesign of the permissions system. From the very beginning Labyrinth has had a permissions system, which for many of the websites was adequate. However, the original Mephisto project encompassed a permissions system for the tools it used, which for Labyrinth were redesigned as plugins. Currently a user has a permission level: Reader, Editor, Publisher, Admin or Master. Each level grants more access than the previous one, as you might expect. Users can also be assigned to groups, which have permissions of their own. It is quite simplistic, but as most of the sites I've developed only have a few users, granting these permissions across the whole site has been perfectly acceptable.
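
In code terms, the current scheme boils down to a simple ordered comparison. The sketch below is only an illustration of the idea, using the level names from this post rather than Labyrinth's actual API:

    use strict;
    use warnings;

    # Each level implies all the levels beneath it, so a permission
    # check is just a numeric comparison.
    my %LEVEL = ( Reader => 1, Editor => 2, Publisher => 3,
                  Admin  => 4, Master => 5 );

    sub has_permission {
        my ($user_level, $required) = @_;
        return $LEVEL{$user_level} >= $LEVEL{$required};
    }

    print has_permission('Publisher', 'Editor') ? "allowed\n" : "denied\n";  # allowed
    print has_permission('Reader',    'Admin')  ? "allowed\n" : "denied\n";  # denied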

However, for a project I'm currently working on this isn't enough. Each plugin, and each level of its functionality (View, Edit, Delete), needs different permissions for different users and/or groups. The permissions system employed by Mephisto came close, but it isn't suitable for the current project. A brainwave over Christmas suggested a better way to do this: not just to implement it for the current project, but to improve and simplify the current permissions system, and to enable plugins to set their permissions in data or configuration rather than code, which is a key part of the design of Labyrinth.

This ability to control behaviour via data is a key element of how Labyrinth was designed, and it isn't just about your data model. In Catalyst and other web frameworks, the dispatch table is hardcoded. At the time we designed Mephisto, CGI::Application was the most prominent web framework, and this hardcoding was something that just seemed wrong. If you need to change the route through a request at short notice, you shouldn't have to recode your application and make another release. With Labyrinth, switching templates, actions and code paths is done via configuration files. Changes can be made in seconds. Admittedly it isn't something I've needed to do very often, but it has been necessary from time to time, such as disabling functionality due to broken 3rd party APIs, or switching templates for different promotions.
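
To make the idea concrete, here is a hypothetical sketch of data-driven dispatch; the configuration format and loader are inventions for illustration, not Labyrinth's own code:

    use strict;
    use warnings;

    # Routes live in a configuration file rather than in code, so a
    # path can be re-pointed at a different action or template without
    # a code release. Example line in routes.cfg:
    #   /news , News::List , news_list.html
    my %dispatch;
    open my $fh, '<', 'routes.cfg' or die "Cannot open routes.cfg: $!";
    while (<$fh>) {
        chomp;
        next if /^\s*(#|$)/;    # skip comments and blank lines
        my ($path, $action, $template) = split /\s*,\s*/;
        $dispatch{$path} = { action => $action, template => $template };
    }
    close $fh;

    my $route = $dispatch{'/news'} or die "No route for /news\n";
    print "run $route->{action} with template $route->{template}\n";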

The permissions system needs to work in exactly the same way. A set of permissions for one site may be entirely different for another. Taking this further, the brainwave encompassed the idea of profiles. Similar to groups, a profile establishes a set of generic permissions. Specific permissions can then be adjusted as required, or reset via a profile, on a per-user or per-group basis. This allows site permissions to be tailored for a specific user: UserA and UserB can both have generic Reader access, while UserA has Editor access to TaskA and UserB has Editor access to TaskB. Previously, the permissions system would have meant granting both users Editor access for the whole site. Now, or at least when the system is finished, a user's permissions can be restricted to only the tasks they need access to.
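
One way to picture the resolution order: an explicit per-user, per-task grant wins, then any group-level grant, then the profile default. The data layout below is invented for the sketch; the real system will live in the database and configuration.

    use strict;
    use warnings;

    # Resolve the effective permission for a user and task:
    # explicit override first, then group grant, then profile default.
    sub effective_permission {
        my ($user, $task) = @_;
        return $user->{task_access}{$task}
            // $user->{group}{task_access}{$task}
            // $user->{profile}{default};
    }

    my $user_a = {
        profile     => { default => 'Reader' },
        group       => { task_access => {} },
        task_access => { TaskA => 'Editor' },
    };

    print effective_permission($user_a, 'TaskA'), "\n";   # Editor
    print effective_permission($user_a, 'TaskB'), "\n";   # Reader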

Over Christmas there have been a few other fixes and enhancements to various Labyrinth sites, so expect to see those also find their way back into the core code and plugins. I expect several Labyrinth-related releases this year, and hopefully a few more talks about them all at YAPCs, workshops and technical events in the coming year. Labyrinth has been a fun project to work on, and long may it continue.

File Under: labyrinth / opensource / website


To Wish Impossible Things

Posted on 4th May 2013

The QA Hackathon website has had a bit of an update today. Primarily a new page and new photos have been added, but plenty of other updates have been included too.

The new page is a review page, collecting the various blog and news posts relating to each year's event. Originally I listed all the reviews from previous years in the side panel, but now that we've just had the 6th annual event, the list was starting to look a little too cramped.

With the extra space, I've also been able to include the group shots that were taken at some of the events. Unfortunately there was no group shot taken in Birmingham, and I've not seen any from the 2010 and 2011 events, so if there are any, please let me know. Also, if there is one from the Tokyo Satellite event this year, I would love to include it on the site.

I've added some write-ups for the last few events to the About page. The biggest change, though, is likely only visible to those using screen readers, as I've made many changes to links and images to provide better accessibility. Several fixes to layout, spelling and wording have been included too.

The site, particularly the list of reviews, is still incomplete. If a blog entry is missing that you think should be there, or you spot other items that could do with an update, feel free to email me with details, or fork the repo on GitHub and send me a pull request.

File Under: hackathon / perl / qa / website


Lost In The Echo

Posted on 26th August 2012

I've just released new versions of my use.perl distributions, WWW-UsePerl-Journal and WWW-UsePerl-Journal-Thread. As use.perl was decommissioned at the end of 2010, the distributions had been getting a lot of failure reports, as they used screen-scraping to get the content. As such, I had planned to put them out to pasture on BackPAN. That was until I recently discovered that Léon Brocard had not only released WWW-UsePerl-Server, but also provided a complete SQL archive of the use.perl database (see the POD for a link). Combining the two, he put up a read-only version of the website.

While at YAPC::Europe this last week, I started tinkering, fixing the URLs, regexes, logic and tests in my two distributions. Both distributions have had functionality removed, as the read-only site doesn't provide all the features of the old dynamic site. The most obvious is that posting new journal entries is now disabled, but other lesser features no longer available are searching for comments by thread id and for users by user id. The majority of the main features are still there, and for those that aren't I've used alternative methods to retrieve the data where possible.

Although the distributions and modules are now working again, they're perhaps not as useful as they once were. As such, I will be looking to merge both distributions in a future release, and also to provide support for a local database of the full archive from Léon.

Seeing as no-one else seems to have stepped forward and written similar modules for blogs.perl, I'm now thinking it might also be useful to take my use.perl modules and adapt them for blogs.perl. It might be a while before I finish them, but it'll be nice to have many of the same features. I also note that blogs.perl.org now has paging. Yeah \o/ :) This is a feature I have wanted to see on the site since it started, so thanks to the guys for finding some tuits. There was a call at YAPC::Europe for people to help add even more functionality, so I look forward to seeing what delights we have in store next.

File Under: opensource / perl / website


Know Your Rights

Posted on 26th May 2011

The changes required as part of the EU Privacy and Electronic Communications Directive, which I discussed last week, come into effect today (26th May 2011). The Information Commissioner's Office (ICO) issued a press release on their website stating that "Organisations and businesses that run websites aimed at UK consumers are being given 12 months to 'get their houses in order'." However, this statement only serves to confuse the issue further. Does it mean that individuals are not covered by the law (the directive implies they are), or does it mean that the leniency given to businesses does not apply to individuals, and thus the full weight of the law and its fines will be imposed immediately? The press release also seems to imply that the new law only applies to businesses providing ecommerce websites, so does that mean other businesses and organisations are exempt?

Or does it mean that those implementing the law and writing press releases are so eager to get something out that they have forgotten their peace offering to (some?) businesses still leaves a gaping hole in their policy of adhering to the original directive?

And it gets worse. In an article on eWeek, George Thompson, information security director at KPMG, is quoted as saying "The new law inadvertently makes the collection of consent - yet another set of sensitive, customer data - compulsory. Companies need to tighten up their data management policies and make absolutely sure that every new data composition is covered." This leads me to believe that you can now be fined if you don't ask the user to accept cookies, and fined if you don't record details of those who said they don't want cookies! Then I assume you can be fined again if that data isn't securely stored away, to adhere to the Data Protection Act.

Did no-one really sit down and think of the implications of all this?

The Register reports that only 2 countries within the EU have notified the Commission that all the rulings have been passed into law, with the other Member States possibly facing infringement proceedings. With such a weight of resistance, wouldn't it be wiser to review the directive properly, so that all Member States understand and agree to all the implications?

It's not all doom and gloom though. Another article, by Brian Clifton on Measuring Success, looks at Google Analytics and concludes that "Google Analytics uses 1st party cookies to anonymously and in aggregate report on visits to your website. This is very much at the opposite end of the spectrum to who this law is targeting. For Google Analytics users, complying with the ToS (and not using the other techniques described above), there is no great issue here - you already respect your visitors privacy...!" (also read Brian's car counting analogy in comment 3, as well as the other comments). In fact Google's own site about Google Analytics supports Brian's conclusion too.

The BBC have posted on their BBC Internet Blog, explaining how they are going to change to comply with the law. To begin with, they have updated their list of cookies used across all their services. Interestingly, they list Google Analytics cookies as 3rd-party cookies, even though they are not, but I think that comes from the misunderstanding many of us had about GA cookies.

Although the ICO website has tried to lead by example, with a form at the top of their pages requesting you accept cookies, this doesn't suit all websites. This method of capturing consent works fine for dynamic websites generated from self-controlled applications, such as the ICO's own ASP.NET application, but what about static websites? What about off-the-shelf packages that have no support for this sort of requirement?

On the other side of the coin, the ICO themselves have discovered that a cookie used to maintain session state is required by their own application. Provided these are anonymous, the directive would seem to imply that such cookies are exempt, as being "strictly necessary" for the running of the site. Then again, if they did contain identifying data, but the application wouldn't work without it, is that still "strictly necessary"? A first step for most website owners will be to audit their use of cookies, as the BBC have done, but I wonder how many will view them all as strictly necessary?

In general this is going to be an ongoing headache for quite some time, with ever more questions than answers. As some have noted, it is going to take a legal test case before we truly know what is and isn't acceptable. Here's hoping it goes before a judge well versed in how the internet works, and that common sense prevails.

File Under: internet / law / life / website


The Sanity Assassin

Posted on 12th May 2011

An update to my recent post.

With thanks to a fellow Perler: Smylers informs me that a Flash Cookie refers to the cookie used by Flash content on a site, which saves state on the user's machine, bypassing browser preferences. Odd that the advice singles out this type of cookie by name though, and not the others.

In an article in the Wall Street Journal, which I came across after posting my article, I was interested to discover that the ICO themselves use Google Analytics. So after 25th May, if you visit the ICO website and see no pop-up, I guess that means Google Analytics is good to go. Failing that, they'll see a deluge of complaints that their own website fails to follow the EU directive.

I also recommend reading StatCounter's response. They too note the problem with the way hosting locations are (not) covered by the directive, and the fact that protection from behavioural advertising has got lost along the way.

After a discussion about this at the Birmingham.pm social meeting last night, we came to the considered opinion that this will likely just be a wait-and-see game. Until the ICO bring a test case to court, we really won't know how much impact this will have. Which brings us back to the motives for the directives. If you're going to take someone to court, only big business is worth fining. Bankrupting an individual or a small business (the ICO now have powers to fine up to £500,000) is going to give the ICO, the government and the EU a lot of really negative press.

Having tackled the problem in the wrong way, those the directives sought to bring into line will simply use other technologies to retrieve and store the data they want. It may even affect EU hosting companies, if a sizeable portion of their market decide to register and host their websites in non-EU countries.

In the end the only losers will be EU businesses, and thus the EU economy. Did anyone seriously think these directives through?

File Under: government / law / security / technology / usability / web / website


The Planner's Dream Goes Wrong

Posted on 11th May 2011

On 26th May 2011, UK websites must adhere to an EU directive regarding cookies that still hasn't been finalised. Other member states of the EU are also required to have laws in place that enforce the directive.

Within the web developer world this has caused a considerable amount of confusion and annoyance, for a variety of reasons, and has enabled media outlets to scaremonger about the doom and gloom that could befall developers, businesses and users. It wouldn't be so bad if there was a clear piece of legislation that could be read, understood and followed, but there isn't. Even the original EU directives are vague in the presentation of their requirements.

If you have the time and/or inclination, the documents to read are Article 2 of Directive 2009/136/EC (the Directive), which amends the E-Privacy Directive 2002/58/EC (the E-Privacy Directive); both are part of the EU Electronic Communications Framework (ECF).

Aside from the ludicrous situation of trying to enforce a law with no actual documentation to abide by (George Orwell would have a field day), and questioning why we are paying politicians for this shambolic situation, I have to question the motives behind the creation of this directive.

The basic Data Protection premise for tightening up the directive is a reasonable one; however, the way it has been presented is potentially detrimental to the way developers, businesses and users, particularly in the EU, are going to browse and use the internet. The directive needed tightening due to the way advertisers use cookies to track users as they browse the web and target adverts. There has been much to complain about in this regard, far beyond the use of cookies, with companies such as Phorm trying to track information at the server level too. However, the directive has ended up being too vague, and covers too wide a perspective to tackle the problem effectively.

Others have already questioned whether it could push users to non-EU websites to do their business, because they get put off using EU-based sites. Continually being asked whether you want to have information stored in a cookie every time you visit a website is going to get pretty tiresome pretty quickly. You see, if you do not consent to the use of cookies, that refusal cannot be saved in a cookie either, so when you revisit the site, it doesn't know you said no, and will ask you all over again. For those happy to have simple preferences and settings stored in cookies, you'll be asked once and never again. If you need an example of how bad it could get, Paul Carpenter took a satirical look at a possible implementation.

On Monday 9th May 2011, the Information Commissioner's Office (ICO) issued an advice notice to UK businesses and organisations on how to comply with the new law. However, even their own advice states the document "is a starting point for getting compliant rather than a definitive guide." They even invent cookie types that don't exist! Apparently "Flash Cookies" is a commonly used term, except that in the web technology world there are just two types of cookie, Persistent Cookies and Session Cookies. They even reference the website AllAboutCookies, which makes no mention of "Flash Cookies". Still not convinced this is a complete shambolic mess?

The directives currently state that only cookies that are "strictly necessary" to the consumer are exempt from the ruling. In most cases shopping carts have been used as an example of cookie usage which would be exempt. However, the ruling doesn't exempt all 1st party cookies (those that come from the originating domain), and especially targets 3rd party cookies (from other domains). The advice states "The exception would not apply, for example, just because you have decided that your website is more attractive if you remember users' preferences or if you decide to use a cookie to collect statistical information about the use of your website." Both examples have significant disruption potential for websites and their visitors.

Many of the 1st party cookies I use are Session Cookies, which either store an encrypted key to keep you logged into the site, or store preferences to hide/show elements of the site. You could argue both are strictly necessary or not, depending on your view. As for 3rd party cookies, like many people these days I use Google Analytics to study the use of my websites. Of particular interest to me is how people find the site, and the search words that brought the visitor to the site. It could be argued that these are strictly necessary to help site visitors find the site in the first place. Okay, it's a weak argument, but the point remains that people use these types of analysis to improve their sites and make the visitor experience more worthwhile.

Understandably, many people have questioned the implications of using Google Analytics, and in one Google forum thread, the Google-approved answer seems to imply that websites will only need to make it clearer that they use Google Analytics. However, this is at odds with the ICO advice, which says that isn't enough to comply with the law.

If the ruling had been more explicit about consent for the storing of personal data in cookies, such as a name or e-mail address, or the use of cookies to create a personal profile, such as with advertiser tracking cookies, it would have been much more reasonable and obvious what is permissible. Instead it feels like the politicians are using a wrecking ball to take out a few bricks, but aiming at the wrong wall.

For a site like CPAN Testers Reports, it is quite likely that I will have to block anyone using the site unless they explicitly allow me to use cookies. The current plan is to redirect people to the static site, which will have Google Analytics switched off, and has no other cookies requiring consent. It also doesn't have the full dynamically driven content of the main site. In Germany, which already has much stricter requirements for data protection, several personal bloggers have chosen not to use Google Analytics at all, in case they are prosecuted. I'm undecided at the moment whether I will remove GA from my websites, but I will watch with interest whether other bloggers use pop-ups or remove GA from their sites.

Perhaps the most frustrating aspect of the directives and the advice is that they discuss only website compliance. They don't acknowledge that websites and services may be hosted on servers outside the EU, even though the organisation or domain may be registered within the EU. They also don't differentiate between commercial businesses, voluntary organisations and individuals. Personal bloggers are just as at risk of prosecution as multinational, multibillion [currency of choice] businesses. The ICO is planning to issue separate guidance on how they intend to enforce these Regulations, but no timescale has been given. I hope they make it absolutely clear that commercial businesses, voluntary organisations and individuals will all be treated differently from each other.

In their eagerness to appear to be doing something, the politicians, in their ignorance, have crafted a very misguided ruling that will largely fail to prevent the tracking of information and the creation of personal profiles, which was the original intent of the changes. When companies such as Phorm can compile all this personal information on their servers, using the same technology to capture the data but sending it back to a server rather than saving a cookie, have these directives actually protected us? By and large, the answer is a resounding No. Have they put in place a mission to disrupt EU business and web usage, and deter some from using EU-based websites? Definitely. How much this truly affects web usage remains to be seen, but I suspect initially there will be an increase in pop-ups appearing on websites asking to use cookies.

It will also be interesting to see how many government websites adhere to the rulings too.

File Under: government / law / security / technology / usability / web / website


Into The Blue

Posted on 7th May 2011

I haven't been posting recently about the Perl projects I'm currently working on, so over the next few posts I hope to remedy that.

To begin with, one of the major projects I've been involved with for the past 8 years has been CPAN Testers; you can find out more about my work there on the CPAN Testers Blog. This year I've been releasing the code that runs some of the websites, specifically those based on my other major project, Labyrinth. Spearheading these releases have been the CPAN Testers Wiki and CPAN Testers Blog, with further releases for the Reports, Preferences and Admin sites also planned. The releases have taken time to put together, mostly because of the major dependency they all share, which is Labyrinth.

Labyrinth is the website management framework I started writing back in 2002. Since then it has grown and become a stable platform on which to build websites. With both the CPAN Testers Wiki and the CPAN Testers Blog, three key plugins for Labyrinth have also been released, which hopefully others can make use of.

The Wiki plugin was originally intended for the YAPC::Europe 2006 Wiki, but with the pressures of organising the conference and setting up the main conference site (which also used Labyrinth), I didn't get it finished in time. Once a CPAN Testers Wiki was mooted, I began finishing off the plugin and integrating it into Labyrinth. The plugin has been very stable for the last few years, and as a consequence was the first non-core plugin to be released. It's a fairly basic Wiki plugin, without too many bells and whistles; there are a couple of Perlish shortcuts, but for the most part you don't need them. The CPAN Testers Wiki codebase release was also the first complete working site for Labyrinth, which was quite a milestone for me.

Following that success, the next release was for the CPAN Testers Blog. Again the underlying plugin, the Blog plugin, has been stable for a few years, so it was fairly quick to package and release; however the secondary plugin, the Event plugin, has been evolving for quite some time and took a little longer. As I use both these plugins for several other sites, it was a good opportunity to bring together various minor bug fixes and layout changes. Some of these have meant slight modifications to the core Labyrinth codebase and the core set of plugins. In addition, it has prompted me to start working on the documentation. It is still a long way from complete, but at least the current documentation might provide some guidance to other users.

One of my major goals for Labyrinth was for it to be a 'website in a box'. Essentially this means that I wanted anyone to take a pre-packaged Labyrinth base (similar to the Demo site), drop it on a hosting service and be able to run a simple installation script to instantiate the database and configuration. The installation would then also be able to load requested plugins, and amend the database and configuration files appropriately. I haven't got to that stage yet, but it is still a goal.

With this goal in mind, I have read with interest the recent postings about DotCloud now being able to run Perl apps. This is definitely great news, and is exactly the kind of setup I had wanted to make best use of for the 'website in a box' idea. However, with several other frameworks now racing to have the coolest instance, it isn't something I'm going to concentrate on right now for Labyrinth. There is also the fact that Labyrinth isn't a PSGI framework, a capability others have eagerly added to their favourite frameworks. Labyrinth came from a very different mindset than the now better-known frameworks, and tries to solve some slightly different problems. With just me currently working on Labyrinth, as opposed to the teams of developers working on other frameworks, Labyrinth is never going to be the first choice for many reasons. I shall watch with interest the successes (and lessons learned from any hiccups) of the other frameworks, as it is something I would like to get working with Labyrinth. If anyone has the time, knows PSGI/Plack well enough, and would like to add those capabilities to Labyrinth, please get in touch.

The next notable plugins I'll be working on are the Survey, Music and Gallery plugins. The first of these has its own post coming shortly. The next notable CPAN Testers site release planned is the Reports site. Being considerably more involved, it might take a little longer to package and document, but it will likely be the most complex site release for Labyrinth, which will give anyone interested in the framework a good idea of how it can be used to drive several sites all at once.

File Under: labyrinth / opensource / perl / web / website


Addicted to Chaos

Posted on 31st March 2011

Some time ago, a website I was working on needed the ability to view images on the current page from a thumbnail. Many websites now feature this functionality, but at the time only a few seemed to offer it, and the assumption was that the javascript required was rather complex. As such, I did a search of the viewer libraries available, either as Open Source or as free downloads, that I could use for a commercial website.

The initial search revealed rather more limited results than I expected, and seemed to imply that the complexity had put people off developing such a library. However, in retrospect it seems that a market leader had become so popular, stable and robust that others have chosen to provide different or limited presentations based on similar designs.

Back last year I began writing a review of some of the viewers, but never got around to finishing it. Having some time recently, I decided to both complete the review and revisit the viewers to see what improvements had been made since I first investigated them.

Before I begin the individual reviews, I should note the requirements I was looking for in a viewer. Firstly, the viewer needed to be self-contained, both in files and directory structure, so that the feature could be added or removed with minimal changes to other website files. The viewer needed to run completely on the client side; no AJAX or slow loading of large images would be acceptable. However, the most significant requirement was that all code needed to work in IE6. Unfortunately this last requirement was non-negotiable.

I was quite surprised by the solutions I could find around the web, and although there are likely to be others now, the following is a brief review of each of the four immediate solutions I found, and my experiences with them.

Lightbox

Possibly the best known thumbnail viewer library available, and now a clear market leader. The original review was of v2.04, which had been the stable release since 2008. This month (March 2011) has seen a v2.05 release with added IE9 support. Lightbox is licensed under the Creative Commons Attribution 2.5 License, and is free to use for commercial projects, although a donation would be very much appreciated.

While this viewer works in most browsers, and the features of image sets and loading effects looked great, it proved unworkable in many of the IE6 browsers I tried across multiple platforms. Despite searching forums and howtos, there didn't seem to be an obvious fix to the problem. The viewer would either not load at all, load with a black layer over the whole web page, or begin to load and crash the browser. I know there are many problems and faults with IE6 and its javascript rendering engine, but these were supposedly stable releases.

As Lightbox makes use of the Prototype Framework and the Scriptaculous Effects Library, which were already being used within the website the viewer was for, the library initially seemed to be the best fit. Failing IE6 so dramatically and consistently, though, disappointingly meant it couldn't be pursued further.

Slimbox

Slimbox is a Lightbox clone written for the JQuery Javascript Library. v2.04 is the last stable release, and the release that was originally reviewed. Slimbox is free software released under the MIT License.

Slimbox is based on Lightbox 2, but utilises more of the JQuery framework and is thus slightly less bulky. While it worked well in the browsers I tried, it flickered several times in IE6 when loading the image. Anyone with epilepsy viewing the effect might well have felt ill; even for someone unaffected, the strobing was extremely off-putting. I suspect this problem may well be an alternative side-effect to those seen with the original Lightbox, but again forums and howtos didn't provide a suitable fix for the problem.

Dynamic Drive Thumbnail Viewer

This is the first of the two thumbnail viewers that Dynamic Drive have available (the second is an inline viewer rather than an overlay, which is what I was after), and is the version made available on July 7th, 2008. Scripts by Dynamic Drive are made available under their Terms of Use, and are free to use for commercial projects.

This is a very basic viewer, relying on basic functionality rather than flashy effects. As such, it is simple in design and presentation. Rather than create a full browser-window overlay, as both Lightbox and Slimbox do, the Dynamic Drive viewer simply contains the viewing image within a simple DIV layer. There is the possibility to add visual effects, but these can easily be turned off.

This seemed to work in most of the browsers tried, except when clicking the image in IE6. The image appeared, but then immediately a javascript error popped up. After quickly reviewing the configuration and turning off the animation, the viewer opened and worked seamlessly across all the browsers tested.

Highslide JS

Highslide JS is a very feature-rich library, which provides much more than an image viewer. Highslide JS is licensed under a Creative Commons Attribution-NonCommercial 2.5 License, which means you are free to use the library for non-commercial projects. For commercial projects two payment options are available: $29 for a single website, and $179 for unlimited use.

The feature set for displaying images includes the style of animation used to open images, the positioning of text, and the linking of image sets. In addition, it also provides many features for regular content, which can be used for tooltip-type pop-ups, embedded HTML, IFrames and AJAX. Another standard feature is the ability to let the user move the pop-up around the screen, to wherever might be convenient.

However, there is a downside. While this works well in most browsers, even just loading the Highslide JS website in IE6 throws up several errors. With the library being so feature-rich, it has a considerably larger codebase, although removing comments can trim it down to just over 8KB, and I suspect some of the older browsers may not be able to handle some of the complexity. The compatibility table suggests that it works all the way back to IE 5.5, but in the tests performed for IE6, when the site did open without crashing the browser, the viewer itself felt rather clunky when an image was opened, and several of the visibility settings just didn't work. You also frequently get an 'Unterminated string constant' error pop-up, which feels disconcerting considering they are asking you to pay for commercial usage.

If IE6 wasn't a factor, this might have been a contender, as the cost is very reasonable for a commercial project that would utilise all its features.

Conclusion

These are just the four viewers that were prominent in searches for a "thumbnail viewer". They all seem to have the same, or at least a similar, style of presentation, which is likely due to the limited ways images can be displayed as an overlay. However, the basic functionality of displaying an image seems to have been overshadowed by how many cool shiny features some can fit into their library, with configuration seeming to be an afterthought.

With the ease of configuration to disable the IE6 error, the basic functionality and the freedom to use it for commercial projects, the Dynamic Drive solution was ultimately chosen for the project I was working on. If IE6 wasn't a consideration, I would have gone with Lightbox, as we already use Prototype and Scriptaculous. With IE6 usage dwindling on the website in question (down from 38.8% in June 2010 to 13.2% in March 2011), it is quite possible that we may upgrade to a more feature- and effect-rich viewer in the future, and Lightbox does seem to be a prime candidate.

Consider this post a point of reference, rather than a definitive suggestion of which image viewer library to use. There may be other choices that suit your needs better than these, but these four are worth initial consideration at the very least.

Browsers & Operating Systems

For reference, these were the browsers I tried, and the respective operating systems. And yes, I did test IE6 on Linux, where it occasionally stood up better than the version on Windows! Though this may be due to the lack of ActiveX support.

  • IE6 (WinXP, Windows7, Linux)
  • IE7 (Windows7)
  • IE8 (Windows7)
  • Firefox 3.6 (WinXP, Windows7, Linux)
  • Opera 9.8 (Linux)
  • Opera 10.52 (Linux)
  • Chrome 5 (Windows7, Darwin)
  • Chromium 6 (Linux)
  • Safari 4 (Darwin, iOS)

File Under: opensource / review / technology / usability / web / website


Long Live Rock'n'Roll

Posted on 9th February 2011

On 1st January 2011, I released the first Open Source version of Labyrinth, both to CPAN and GitHub. In addition I also released several plugins and a demo site to highlight some of the basic functionality of the system.

Labyrinth has been in the making since December 2002, although its true beginnings date from about mid-2001. The codebase has evolved over the years as I've developed more and more websites, and got a better understanding of exactly what I want from a Website Management System. Labyrinth was intended to be a website in a box, and although it's not quite there yet, hopefully once I've released all the plugin code I can put a proper installation tool in place.

Labyrinth is now the backend to several Open Source websites, with CPAN Testers using it for the Reports, Blog, Wiki and Preferences sites, as well as some personal, commercial and community projects. As a consequence, Labyrinth has become stable enough for me to look at growing the plugins rather than the core code. I'm sure there is plenty that could be done with the core code, but for the moment providing a good set of plugins and some example sites are my next aims.

As mentioned, I see Labyrinth as a Website Management System. While many similar applications and frameworks provide the scaffolding for a Content Management System, Labyrinth extends that by not only providing the ability to manage your content, but also providing a degree of structure around the functionality of the site, so that the management of users and groups, menu options and access, as well as notification mechanisms, enables you to exercise more control dynamically.

When writing the forerunner to Labyrinth, one required aspect was the ability to turn functionality on and off instantly, which meant much of the logic flow was described in the data, not the code. Labyrinth has built on this idea, so that the dispatch tables and general functionality can be controlled by the user via administration screens, rather than by uploading new code. When I started looking at this sort of application back in 2001, there was nothing available that could do that. Today there are several frameworks written in Perl that could potentially be tailored to process a website in this way, but all require the developer to design and code the functionality. Labyrinth aims to provide that pre-packaged.

I'm primarily releasing Labyrinth so that I can release all the code that drives the CPAN Testers websites, giving others the ability to suggest improvements and contribute. The system allows me the freedom to build websites quickly and easily, with the hard work going into the design and CSS layouts. With so many other frameworks available, all of which have bigger development teams and support mechanisms than I can offer, I'm not intending Labyrinth to be a competitor. It might interest some, which is great, but if you prefer to work on other frameworks that's great too. After all, it's still Perl ;)

More news of plugins and sites being released coming soon.

File Under: labyrinth / opensource / perl / website


Long Time Gone

Posted on 4th May 2010

It has been quite a few months since I last posted here. Quite a few events and projects have happened and held my attention since I last wrote in my blog. And I still have a backlog of photos and videos from last year to get through too!

I did wonder whether anyone might think that, after talking about Why The Lucky Stiff in one of my last posts, I had done the same. Those who follow my CPAN Testers work will know that CPAN Testers 2.0 has been a rather major project, which finally got properly underway in December 2009. It's nearing completion, and I'll cover some of the highlights in a future post. Although it's been my most consuming project over the last 6 months or so, it hasn't been my only one. As mentioned in another of my recent posts, I'm writing a book about how to host a YAPC. With other projects taking higher priority, this has taken somewhat of a backseat for the time being, but I do plan on getting a second draft together within the next few months. I have looked into self-publishing the book, and I'm now planning to have it formally submitted with an ISBN (the international book number) and supplied via print-on-demand print runs.

Another project that has been ongoing alongside my CPAN Testers work is my website management system, Labyrinth. This is the website application I have been developing since 2002, and although several other Perl web frameworks have been developed since, to lesser and greater degrees, Labyrinth has had the disadvantage of having just one core developer for the past 8 years. It's not an application that will revolutionise web development and deployment, but it has worked very successfully for a number of websites I have developed over the years. Having been relatively stable for the past year or two, I'm now cleaning up the code so I can properly release it as open source. This is mostly so that anyone wishing to contribute to CPAN Testers, or the YAPC Surveys, will have all the code available to them. If anyone wants to use it and help develop it further, that would be a welcome bonus, but realistically other web frameworks have gained so much mindshare that I'm not expecting Labyrinth to make much of a dent any more. Not that that is a problem, as Labyrinth has made deploying websites so much easier for me that I'll just be glad to let people help on CPAN Testers and the YAPC Surveys.

Speaking of the YAPC Surveys, YAPC::NA 2010 and YAPC::Europe 2010 are fast approaching. These will be the next projects to get up and running. Thankfully the code base just needs a few upgrades to the latest version of Labyrinth, and some work on skinning the CSS to match the respective YAPC sites. All being well, this should only take a few days. Then I'll be looking to release this version of the code base for anyone wishing to run similar surveys themselves. I've already had one interested party contact me regarding a conference in October, so hopefully the code will be suitable, and only the questions will need adapting. We shall see.

My other major project this year also began back in December 2009. As some readers are well aware, I am an ex-roadie. From 1989 to 1994 I was a drum tech, lighting engineer and driver for Ark, one of the best Black Country bands ever. Not that I'm biased or anything ;) Last year the band got together for some rehearsals and planned a few reunion gigs. With interest growing, an album was also planned. So this year, the band began recording and booking gigs. As a consequence, the Ark Appreciation Pages desperately needed a makeover. I'll write more about what happened next in another post. Ark are back, and Mikey and I are delighted to be involved with the band once again.

That's just a few of the projects that have taken up my time over the last 6-8 months. There are several others that I hope to post about, family, time and work permitting. Expect to hear a little more from me than you have so far this year.

File Under: ark / book / conference / labyrinth / opensource / perl / website / yapc


April Skies

Posted on 1st May 2009

For those that might not be aware, I got made redundant on 31st March (the day after the QA Hackathon had finished). Thankfully, I start a new job next week, so I've managed to land on my feet. However, this has meant that I've ended up having the whole of April off to do stuff. My plan was to work on some of the Open Source projects I'm involved with, to move them further along to where I wanted them to be. As it turned out, two specific projects got my attention over the last 4 weeks, and I thought it worth giving a summary of what has been going on.

YAPC Conference Surveys

Since 2006, I've been running the conference surveys for YAPC::Europe. The results have been quite interesting, and have hopefully helped organisers improve the conferences each year. For 2009 I had already planned to run the survey for YAPC::Europe in Lisbon, but this year will also see YAPC::NA in Pittsburgh having a survey of its own.

The survey site for Copenhagen in 2008 added the ability to give feedback on Master Classes and talks. The Master Classes feedback was a little more involved, as I was able to get the attendee list, but the talks feedback was quite brief. As such, I wanted to expand on this aspect and generally improve the process of running the surveys. Part of this involved contacting Eric and BooK to see if Act had an API I could use to automate gathering some of the information. I was delighted to get an email back from Eric, who very quickly incorporated an API that I could use to retrieve the necessary data to keep the survey site for a particular conference up to date, even during the conference.

With the API and updates done, it was time to focus on expanding the surveys and skinning the websites to match the now live conference sites. The latter was relatively easy, and only required a few minor edits to the CSS to get them to work with the survey site. The survey site now has 3 types of survey available, though only 2 are visible to anyone not taking a Master Class. Those who have taken one of the YAPC::Europe surveys will be aware that I don't use logins, but a keycode to access the survey. This has been extended so that it can now be used to access your portion of the survey website. The keycode can now be automatically emailed to attendees before the conference (and during it, if they pay on the door), and will allow everyone to give feedback on talks during the conference. On the last day of the conference the main survey will be put live, so you can then answer questions about your conference experience.

I'm hoping the slight change won't be too confusing, and that we'll see even greater returns for the main survey. Once it does go live, I'd be delighted to receive feedback on the survey site, so I can improve it for the future.

CPAN Testers Reports

Since taking over the CPAN Testers Reports site in June 2008, I have spent a great deal of time improving its usability. However, it's come at a price. Using more and more Javascript to dynamically change the contents of the core pages has meant a number of complaints that the site doesn't work for those with Javascript disabled, or who use a browser that doesn't implement Javascript. For this reason I decided that I should create both a dynamic site and a static site. The problem with this is that the current system takes several hours to create all the files for each set of updates (currently about 16 hours per day). I needed a way to drive the site without worrying about how long everything was taking, but also to add some form of prioritisation, so that the more frequently requested pages get updated more quickly than those rarely seen.

During April, JJ and I went along to the Milton Keynes Perl Mongers technical meeting. One of the talks was about memcached, and it got me thinking about whether I could use it for the Reports site. Discussing this with JJ on the way home, we threw a few ideas around and settled on a queuing system to decide what needed updating, and on better managing the current databases, adding indexes to speed up some of the complex lookups. I was still planning to use caching, but as it turned out memcached wasn't really the right way forward.

The problem with caching is that when there is too much in the cache, the older entries get dumped. But what if the oldest item to be dumped is extremely costly on the database, and although it might not get hit very often, it's hit frequently enough to be worth keeping in the cache permanently? It's possible this could be engineered with memcached if it applied to a handful of pages, but for the Reports site it's true of quite a few. So I hit on a slightly different concept of caching. As the backend builder process creates all the static files, part of the process involves grabbing the necessary data to display the basic page, with the reports then being read in via the now static Javascript file for that page. Before dropping all the information and moving on to the next item in the list, the backend can simply write that data to the database. The dynamic site can then grab the data and display the page pretty quickly, saving a LOT of database lookups. Add the fact that the database tables have been made more accessible to each other, and the connection overhead has also been reduced considerably.
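
A minimal sketch of that write-through idea, with the table, column and connection details invented for illustration: the builder saves the assembled page data as it finishes each page, and the dynamic site serves it back with a single cheap lookup.

    use strict;
    use warnings;
    use DBI;
    use JSON;

    my $dbh = DBI->connect('dbi:mysql:database=cpanstats', 'user', 'pass',
                           { RaiseError => 1 });

    # Builder side: store the assembled page data once it's been built.
    sub save_page_cache {
        my ($dist, $data) = @_;
        $dbh->do('REPLACE INTO page_cache (dist, json, updated) VALUES (?,?,NOW())',
                 undef, $dist, encode_json($data));
    }

    # Dynamic site side: one cheap lookup instead of the expensive queries.
    sub load_page_cache {
        my ($dist) = @_;
        my ($json) = $dbh->selectrow_array(
            'SELECT json FROM page_cache WHERE dist = ?', undef, $dist);
        return $json ? decode_json($json) : undef;
    }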

The queuing system I've implemented is extremely simple. On grabbing the data from the cache, the dynamic site quickly checks whether a more recent report exists. If there is one, an entry is added to the queue with a high weighting, to indicate that a website user is actually interested in that data. Behind the scenes, the regular update system simply adds an entry to the queue to indicate that a new report is available, but at a low weighting. The backend builder process then builds the entries with the highest accumulated weightings, generating all the static files for both the dynamic site and the static site, including all the RSS, YAML and JSON files. It seems to work well on the test system, but the live site will be where it really gets put through its paces.
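
The weighting logic can be as simple as the following sketch (the schema is invented for illustration): the regular feed inserts at a low weight, a live page request bumps the weight, and the builder always takes the heaviest entry first.

    # Assumes $dbh is a connected DBI handle, as in the previous sketch.
    sub enqueue {
        my ($dbh, $dist, $weight) = @_;
        $dbh->do('INSERT INTO build_queue (dist, weight) VALUES (?,?)
                  ON DUPLICATE KEY UPDATE weight = weight + ?',
                 undef, $dist, $weight, $weight);
    }

    sub next_to_build {
        my ($dbh) = @_;
        return $dbh->selectrow_array(
            'SELECT dist FROM build_queue ORDER BY weight DESC LIMIT 1');
    }

    # enqueue($dbh, 'Some-Dist', 1);    # background update: low weight
    # enqueue($dbh, 'Some-Dist', 10);   # live page request: high weight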

So you could be forgiven for thinking that's it, the new site is ready to go. Well, not quite. Another part of the plan had always been to redesign the website. Leon had designed the site based on the YUI layouts, and while that works for the most part, there are some pages which don't fit well in that style. It has also been pretty much the same kind of style since it was first launched, and I had been feeling for a while that it needed a lick of paint. Following Adam's recent blog post about the state of Perl websites, I decided that, after the functional changes, the site would get a redesign. It's not perhaps as revolutionary as some would want, judging from some of the ideas for skins I've seen, but then the site just needs to look professional, not state of the art. I think I've managed that.

The work to fit all the pieces together and ensure all the templates are correct is still ongoing, but I'm hopeful that at some point during May, I'll be able to launch the new look websites on the world.

So that's what I've been up to. I had hoped to work on Maisha, my other CPAN distributions, the YAPC Conference Survey data, and the videos from the QA Hackathon, among several other things, but alas I've not been able to stop time. These two projects perhaps have the highest importance to the Perl community, so I'm glad I've been able to get on with them and get done what I have. It's unlikely I'll have this kind of time to concentrate solely on Open Source/Perl again for several years, which in some respects is a shame, as it would be so nice to be paid to do this as a day job :) So for now, sit tight, it's coming soon...

File Under: community / conference / opensource / perl / website


Rockin' In The Free World

Posted on 26th January 2009

Earlier this month, a good friend of mine, Jono Bacon, announced that he was starting to write a book about building communities. It's a subject that has been discussed at length by many communities, many times over many years, and there is no one right answer. Some methods work in one context and not in another. You see, it all depends on the people, and specifically the personalities, who are part of the community, and who you want to encourage (or discourage, as the case may be) into joining, rather more than on the project or common interest itself.

Jono's book, titled Art Of Community, will look at how to build communities from different perspectives. He's getting several notable Open Source community members to contribute their stories, and it looks like it will be a really useful book for those starting a project or user group who want ideas on how to make it happen.

The hard part of starting any community is promotion. Jono himself is taking note of this for the book's promotion too. You see, the book itself has started a community of people who are early supporters and want to help make it a success. Part of making it a success is letting people know it exists. As Jono is already widely known in technical communities (I've known him for about 8 years, thanks to him starting WolvesLUG near me), he does have a head start. But it still needs people to talk about it, discuss it and eventually review it. I thought I'd write this blog post partly to help promote the website that the book now has, but also to make others aware that the book is being written.

I'm looking forward to reading the completed book, as apart from being a great read, I expect it to become a great source of reference for helping new communities promote themselves and flourish.

Having started Birmingham Perl Mongers back in 2000, been a Perl community member, a member of the YEF Venue Committee and a major contributor to the CPAN Testers project, I've become very acutely aware of how hard it can be to build a community. Though it should be noted that the building part isn't just about getting a project or user group off the ground; it's also about keeping it going, encouraging others to get involved and helping the community thrive.

A good case in point is the CPAN Testers project. I first became a CPAN Tester back in 2004, and contributed several thousand reports for the Win32 platform. It was thanks to Leon presenting a BOF at YAPC::Europe 2003 in Paris that I first became interested enough to join the volunteer effort. Shortly afterwards I started contributing code to the smoke tools and the websites, creating the CPAN Testers Statistics website in the process. With the help of the Statistics site I was able to promote the project to other Perl programmers at YAPC events, by showing how valuable a service the project provides. Over the last few years the number of testers has grown, and the number of test reports submitted has gone from about 100 per day to over 5,000 per day. In June 2008, Leon handed over the Reports website to me, as I was eager to improve the websites and make them more useful. Since then, I've had several developers help contribute patches and ideas to the project, and it has been very encouraging to see the community driving the site forward. CPAN Testers now have their own server, a whole family of websites and a great tester community. In our case the community has built itself, and mostly promoted itself, by being a useful set of websites for developers. It'll be interesting to see if Jono pinpoints anything that we actually did do to build the project community without ever realising we were doing it.

I'm also interested in reading the book, as it is likely to have some useful references for a book project I'm currently working on. Although I don't plan on making it a hard copy book, it will be available online, and I hope to encourage contributions and improvements. My book doesn't have a working title as yet, but the subject matter is 'organising Open Source conferences', and it will also include thoughts on workshops, hackathons and large technical meetings. The blueprint for the project is based largely on my own experiences of organising the 2006 YAPC::Europe Perl Conference, but will hopefully include other thoughts and comments from the organisers of other Open Source events, such as LUGRadio Live, which Jono himself was a significant instigator of. Like Art of Community, my project will also be available online under a Creative Commons license, and I'll be watching to see how the Art of Community community establishes itself, to see whether there are any good ideas I could use too.

I look forward to finally reading the book, but in the meantime I'll just have to keep an eye on the Art of Community website for updates.

File Under: community / opensource / people / website
NO COMMENTS


Harder, Better, Faster, Stronger

Posted on 23rd July 2008

I should really have expected that interest in my site would hit overload last night, but I had thought it would cope. Unfortunately it meant I was sitting on the box, watching for when the load average got too high and shutting down processes. As a result I did some quick profiling of the code using the lovely Devel::DProf, and spotted a few calls that were completely unnecessary, both as function calls and database calls. So I've quickly reworked some of the requests, and on my test machine requests are now being processed in roughly 0.8 seconds rather than 1.6 seconds. Result!
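
For anyone wanting to do the same, the basic Devel::DProf workflow is pleasantly simple, and the kind of rework it prompts often amounts to caching something you were needlessly fetching over and over. Here's a minimal sketch; the script, table and column names are invented for illustration:

    #!/usr/bin/perl
    # Profile first, then fix what the report shows:
    #   perl -d:DProf myapp.pl   # run under the profiler; writes tmon.out
    #   dprofpp                  # summarise tmon.out, most expensive subs first
    #
    # A hypothetical example of the kind of rework described above:
    # cache a database lookup that profiling showed was repeated on
    # every request.
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:mysql:database=mysite', 'user', 'password',
                           { RaiseError => 1 });

    my %cache;
    sub get_setting {
        my ($name) = @_;
        # only hit the database the first time each setting is requested
        $cache{$name} ||= $dbh->selectrow_array(
            'SELECT value FROM settings WHERE name = ?', undef, $name);
        return $cache{$name};
    }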

It often takes something like a burst of popularity for you to actually take a second look at the performance of your site. Thankfully in my case the changes are relatively minor, and have made significant improvements. I shall be taking a closer look at a few other sites I run soon, as I'm sure there are similar quick-hit improvements I can make.

File Under: labyrinth / website
NO COMMENTS


Killing The Bland

Posted on 3rd July 2008

For those that have suffered trying to access my sites over the past month, I have to apologise. In the first instance it appears that a TLS library wasn't playing well with Xen, the virtual server management system which runs my server on a physical machine in Amsterdam. Having sorted that one out with some library upgrades, I noticed that the Perl applications running my sites were occasionally going into overload and failing to end, due to what appeared to be hanging I/O requests. I'm still not certain why, but I have put some measures in place to reduce the I/O requests.

However, while investigating the server processes and open files, etc., I noticed that there were rather a large number of cronjobs running, including several that had nothing to do with anything I'd set up manually. The most resource-hogging ones appeared to be PHP related, for both php4 and php5. Seeing as I don't run any PHP on the server, I set about getting rid of them. It took a while to trace, but it appears that mod-php4 and mod-php5 had been installed with apache2, possibly following an upgrade I did recently, and the associated packages had all been installed too. I have now removed about 20 packages and several thousand files from the server, which has freed up a bit of disc space, but has also stopped several cronjobs, thus reducing the load on the server. I'm hoping this solves the recent overload issues, but if it doesn't, at least it might last a bit longer before keeling over!

I've been monitoring the box for the last couple of days and everything looks to be running fine again. If you suffer anything similar, I suggest looking in /etc/cron.d and related directories for things that you really don't expect to be running.
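
If you want a quick way to scan those directories, something like this minimal sketch will do; it simply lists every entry in the usual Debian-style cron locations so anything unexpected stands out:

    #!/usr/bin/perl
    # List every file in the standard cron directories; anything you
    # don't recognise deserves a closer look.
    use strict;
    use warnings;

    for my $dir (qw( /etc/cron.d /etc/cron.hourly /etc/cron.daily
                     /etc/cron.weekly /etc/cron.monthly )) {
        next unless -d $dir;
        opendir(my $dh, $dir) or next;
        print "$dir/$_\n" for sort grep { !/^\./ } readdir $dh;
        closedir $dh;
    }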

File Under: computers / website
NO COMMENTS


Lost Horizon

Posted on 8th April 2008

For the past few months I've been playing Lost Cities on a site called YourTurnMyTurn. I've got into it so much I'm considering upgrading to a VIP account so I can play even more. I've tried some of the other games on there, but none have kept my interest as much as Lost Cities.

I'm hoping that they introduce a few more two-player games based on German-style board games, as they've certainly scored a hit with me with this one. Kahuna and Pick And Pack aren't bad, but personally I got a bit bored too quickly. If I was playing them with someone in person, then it would probably have been alright, but playing remotely lost the momentum for me.

I managed to get to the Quarter Final in my latest tournament, but unfortunately was up against a good player who managed to just pip me on points. I'm playing in another 3, and am soon to start another, so fingers crossed I can get further in them.

I've been playing it so much I haven't been playing Xplorers (aka Settlers of Catan) on AsoBrain for a while. I might even start entering some tournaments in that too at some point ... if I ever get the time!

File Under: games / website
NO COMMENTS


Let Me In

Posted on 3rd April 2008

The problem with those that get high and mighty about username/password site logins is that they often use examples where you really do want some degree of protection, not from yourself, but from others. Of the 16 Account Design Mistakes listed in Part 1 and Part 2 by Jared M. Spool, most include good ideas for developers; however, some use examples where the sites are quite right to be obscure.

Take #13, "Not Explaining If It's The Username or Password They Got Wrong", which proceeds to hold up Staples and American Express as the worst offenders. I'm sorry, but if I have accounts with companies like that, then there is no way on earth I want them giving crackers hints as to whether they got my username or password wrong. Those kinds of sites contain VERY sensitive personal information, not least of which is your credit card information. If Jared is that eager to share his financial information, I'm now wondering if he publishes it on his personal website. Could it be that perhaps the very security he ridicules actually protects him from identity theft?

Another is #16, "Requiring More Than One Element When Recovering Password", where a company requires some form of additional account information other than just your email address. Again, this is a company that holds your credit information and, by the sound of it, some very personal information (such as your phone number). Does Jared post his personal phone number on his website? I doubt it, as I assume he doesn't want all and sundry knowing it, thus exposing him to more identity theft.

Don't get me wrong, Jared does list some good thoughts about username/password site logins, but the context in which he ridicules some sites and companies is grossly misplaced. The problem is that the author often thinks only in terms of making life easier for themselves, forgetting that you can also make it easy for those of a more malicious nature too. On all, or possibly nearly all, the sites that I have a login for, the login is there to protect my account on the site from abuse. I know there are sites out there that only provide customisations with your login, but I don't use them. Even those that don't contain personal information, I would not want anyone to hack into. If you're happy to make it easy for someone to log into your blog account and post spam, abusive or malicious content, then fine, make it easy. For the rest of us, we'd rather have some form of protection on the account that makes it a little harder for others to get through.
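
For what it's worth, the behaviour I'm defending here is trivial to implement: whichever part of the credentials is wrong, you return the same generic message, so a cracker learns nothing about which one failed. A minimal sketch, where the user store is an invented stand-in (real code would of course compare hashed passwords, not plain text):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my %users = ( barbie => 'secret' );   # invented example data

    sub login {
        my ($username, $password) = @_;
        my $stored = $users{$username};
        unless (defined $stored && $stored eq $password) {
            # deliberately vague: don't reveal which part was wrong
            return (0, 'Invalid username or password');
        }
        return (1, $username);
    }

    my ($ok, $msg) = login('barbie', 'wrong');
    print $ok ? "Logged in as $msg\n" : "$msg\n";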

File Under: design / rant / security / usability / website
NO COMMENTS


Dancing with the Moonlit Knight

Posted on 21st February 2008

This week it seems eBay are changing their policies on a number of things, one of them being Feedback. My mother told me she had read about it in the paper, but seeing as I hadn't noticed anything in my inbox from them, and it wasn't obvious from any of the general announcements, I assumed that either the paper had put the wrong spin on it to generate "news" (typical of the paper in question), or my mother had misunderstood the actual news article. I suspected the paper to be at fault. However, after a quick search I found this blog post, which picked up on the feedback issue, and after a bit of digging through all the recent announcements, I finally found the announcement specific to feedback. Why they had to hide it away I don't know. With such a big change I would have expected to see it in a "news" or "update" box on the front page.

Anyway, the point of the feedback changes seems to be to protect buyers from poor sellers. They believe that "buyers will be more honest when they leave Feedback since they will not fear retaliatory negative Feedback." Sorry, but I don't buy that. I've had several buyers who have failed to follow through and left me with a bill for the final value fees (FVFs) from eBay. eBay DO NOT make it easy to get those fees back. Thankfully, I've not been given bad feedback. I have also been caught out by bad sellers trying to sell counterfeit products, but having contacted both sellers, in my case I was able to get a refund. Now admittedly not everyone may be as successful, and could quite easily be ripped off by quite a considerable amount, but I do believe that negative feedback has its place. If there is ever any issue with retaliatory negative feedback, then there should be a mechanism whereby either party can alert eBay to the situation for it to be handled more appropriately. From my experience eBay make it very difficult to contact them, and when you do try to contact them it falls on deaf ears.

eBay also state, "When buyers receive negative Feedback, they reduce their activity in the marketplace, which, in turn, harms all sellers". Ever thought that sometimes there are buyers for whom that is a good thing? At the moment a seller has a difficult time doing anything about a bad buyer, and in some cases the only way to alert other sellers is by leaving reasonable negative feedback. How are eBay going to better protect sellers from persistently bad buyers? Some sellers refuse to deal with anyone who has less than 100 points, and I can see that getting worse, as having to pay eBay what amounts to a fine for being an honest seller is not good enough. And please don't tell me about their Unpaid Item system, as I was told my window of opportunity had passed (or words to that effect), after I had waited a couple of weeks, sending private emails and messages via eBay itself, after the end of the auction. Any experience of trying to deal with eBay themselves has, for me personally, never been a good one. I always end up feeling that they are only interested in taking my money, never willing to sort things out when things go wrong.

Thankfully my actual auction experience with eBay has been good, and I've been very happy with both buyers and sellers in nearly all my transactions. I wouldn't stop using eBay because of these changes, but they will make me more wary of the feedback mechanism, both as a seller and a buyer, as I'm not sure the changes are favourable to anyone. Except maybe eBay themselves, as it will mean less data storage.

I'm not convinced by some of the changes they propose, although some do have merit, so I shall wait and see what the outcome is for me. I may not sell high volumes, but if I find myself getting messed around because I'm not able to spot bad buyers, then I may find alternative places to sell my CDs and music memorabilia. If others follow suit then buyers have less choice and prices get higher, thus eBay wins more from FVFs. I think I see the pattern here. Or maybe I'm just cynical ;)

File Under: commerce / ebay / rant / website
NO COMMENTS


Fixing A Hole

Posted on 5th February 2008

I recently made some minor alterations to the site, most of which you shouldn't notice, and some of which are part of the admin screens. However, one noticeable part that I've removed is the Digg links. I can't really say why I added them in the first place, apart from the fact that it seemed like a good idea at the time and several other sites have them too. My site doesn't really get the high-end traffic that other more prolific and structured writers get, so it seemed a bit daft keeping them there when no-one was ever likely to use them. I know a few people read my thoughts via their favourite RSS feeds, so obviously that has been worth adding to the site, but Digg... well, at least I know how it works if I'm ever asked to add it to another site ;)

File Under: labyrinth / website
NO COMMENTS


Hello, Hello, Hello, Hello, Hello (Petrol)

Posted on 16th January 2008

Continuing the motoring theme, I recently discovered a newish site, PetrolPrices.com. It follows the comparison site idea, but for petrol (and other fuel types) at petrol stations and supermarkets, etc. across the UK. Although it hasn't been going for long, it does seem to have gained quite a bit of interest and is a worthy addition to your bookmarks.

The only annoying thing (for me personally) is that I had the idea to do this (as I'm sure several others did) several years ago when I first started developing websites :) However, I am surprised that no-one has developed this kind of site before, as it is a resource that has been wanted for a very long time. In my case I didn't develop my idea further, as I didn't know of a reliable way of getting at realtime data. Originally I was going to allow users to enter prices from their local filling station, but this is open to abuse, and considering that I wouldn't have been able to verify the prices, it would likely have been very unreliable. PetrolPrices.com have overcome this hurdle thanks to Catalist. Although they existed back in 1998, my research never revealed them, and I'm sure others had the same problem. With data for over 10,000 stations up and down the country, they certainly have that data market sewn up.

I shall be using PetrolPrices.com from now on, although from initial searches it would seem that the price I'm currently paying is quite low. While there are petrol stations that charge a lower price, the effort of getting to and from them would probably negate a lot of the benefit over filling up at stations that are actually on my route to and from work. However, I can see it being very useful when driving long distances, particularly when on holiday, for finding the cheapest and closest filling station.

File Under: cars / driving / website
NO COMMENTS


Who Are You

Posted on 20th July 2007

So I've been banned from Facebook.

They claim I can't use a fake name, but have failed to appreciate that they are a social networking site. In addition, what is a "fake name"? Barbie is my pseudonym; I've used it for over 20 years in both my careers, in the music industry and the IT industry. Using a combination of 'perl', 'barbie' or 'birmingham' will bring up pages about me on Google. Most people in the companies I've worked for in the last 15 years have referenced me as Barbie, including most CEOs, Managing Directors and board directors. Some have never been introduced to me by my birth name.

I find it a bit odd that a social site would try to impose their way of thinking onto anyone who doesn't fit their idea of how everyone should represent themselves online. I do understand that they might want to retain my birth name should they need to take any legal action over something I may write on their site, but I do not want my birth name to appear publicly, just because they feel that everybody who uses their site must give up areas of their privacy.

I've emailed them to explain that Barbie is a true identity, and that legally I am entitled to sign cheques and the like as Barbie. It is my professional pseudonym, and for my last 3 jobs it was stated from the outset, in the interview, that I am Barbie.

However, Mark has highlighted another issue with their system, which affects those who have made-up names, either because we westerners can't pronounce their true names, or because their language's characters are not something westerners know how to pronounce. Are they going to be banned too?

The email I received stated that I have to provide a full first name (no initial) and a full last name. I can have a nickname, providing it is derived from one or both of those two fields. Why? I honestly fail to understand the logic of that. Many people I know have nicknames that are completely unrelated to their birth name, and I find it difficult to understand why a social website wants to insist on what I call myself.

Are they a secret government site with covert reasons for knowing everybody's birth names? Somehow I don't think so. Do they have ideas far beyond what the rest of the world expects of them? Quite probably. Will they reinstate me? Probably not.

I hope they learn to understand their audience and not impose such silly restrictions on something that is essentially about connecting with friends and colleagues. They all know who I am, and I'm pretty sure every single one would vouch for me. Pity then that the people at Facebook have some draconian rule that they feel they need to enforce on those of us who don't fit their profile.

File Under: facebook / rant / web / website
3 COMMENTS


Welcome To The Monkey House

Posted on 14th April 2007

Why Grango? That, and how to pronounce it, have been asked every so often. I'll answer the latter first. It's pronounced "Gran" as in 'granny' or 'Gran Turismo', and "go" as in 'to go somewhere'.

The initial question begins several years ago. When DanDan was 2, he started making references to something he called The Grango. We had no idea what he meant, but he kept trying to tell us about it. We eventually got the gist that he was trying to describe the monster in a few nightmares he'd had. It was an odd description, and because he didn't know enough words at the time, he struggled to give any clues as to what the creature was. Then a while later we happened to watch a nature programme on TV, and he pointed out The Grango. It was an orang-utan. I don't think he did a bad job trying to say what it was, all things considered.

When it came round to sorting out a new domain name for some server space I wanted, it seemed an obvious choice. It turns out others had thought of the name for other reasons, but the .org domain was available. In time I'll probably give the domain to DanDan, but for now it's proved handy for my Open Source work.

Another obvious choice was the title for this piece. Remember Animal Magnet? No? Well, it seems all the commenters on this clip remember hearing it played most nights at Edwards No.8 in Brum :)

File Under: dandan / labyrinth / website
NO COMMENTS


Out Of Nowhere

Posted on 27th March 2007

It's always nice to get a project finished. I've been planning to rewrite my personal site for over a year, and finally this is the result. There's still plenty to add and improve, but at least it's finally replaced that 1998 version!

File Under: labyrinth / website
NO COMMENTS


Where's My Thing?

Posted on 25th March 2007

So here is where it begins. Not sure where this is going to go, but then nobody really knows where anything goes.

I had thought to post my non-techie stuff here, but I'll probably post whatever comes into my head, so apologies for that! Expect random musings, rants and general points of interest, together with the occasional decent photo that I manage to take as I slowly learn how to use a camera again. I used to take tons of concert photos when I was on tour, but trying to do the same with a digital camera is proving rather difficult. Still, it's fun :)

File Under: labyrinth / website
NO COMMENTS


Some Rights Reserved. Unless otherwise expressly stated, all original material of whatever nature created by Barbie and included in the Memories Of A Roadie website and any related pages, including the website's archives, is licensed under a Creative Commons by Attribution Non-Commercial License. If you wish to use material for commercial purposes, please contact me for further assistance regarding commercial licensing.