Avoiding features vs abandoning them

I stumbled on an article from Matt Galligan about the initial versions of Cir.ca, a news aggregator app from a few years ago that recently closed up.

The article talked about many of the design ideas they had and the struggles they worked through before changing or abandoning some of them. Here’s an example:

[Screenshot: a Circa design mockup with headline text overlaid on a news photo]

I don’t think it’s just me, but certainly types of people *like* me, who would immediately look at that and say “it can’t realistically be done”. There are too many variations of image sizes, crops, and ratios for decent text overlays to be anywhere near automatable. Without automation, every item would need human input to get that visual aesthetic ‘just right’.

I’ve pointed this out not in hindsight but in ‘present sight’ on many projects: specifically, tying interfaces to designs that were only tested with 2 images and 3 lines of content, but that are intended to support unlimited amounts of images and content. All the use cases can’t be defined in a single Photoshop file. As the person who has to actually make it all work, you have to account for all input values: headlines won’t always be 24 characters. How do you deal with a 70-character headline? How do you deal with user-generated photos with non-standard aspect ratios? These questions have to be addressed, and often the best solution is severely limiting the input variables if ‘design’ is the primary concern.

And yet, I’ve been on multiple projects where a design like this gets approved (often by many folks) before anyone who will have to implement it gets involved. When reality hits, the pushback on the dev/engineer is often “just make it work”, or “quit being negative”, or what have you. I can only imagine how much time and money was wasted on this particular issue, and how often this *exact* problem has been repeated across hundreds or thousands of app startups over the last few years, each team beating its head against the wall trying to implement the ‘vision’, collectively wasting thousands of hours and dollars.

In many cases, it’s better (in terms of getting to market, hitting deadlines, reducing time/money waste, etc) to avoid working on features up front vs having to abandon them later.  Convincing others of this, especially after decisions have already been made, is often a difficult task.

While I certainly was not glad to see cir.ca close up, I was grateful to see Matt’s notes here. I’m curious whether anyone will actually pay heed to some of the lessons in this particular presentation and learn from them, saving themselves loads of time, money, and headache. My cynical nature expects not, because people always think their project/team/vision is more unique and special than it really is, and that they’ll ‘get it right’ where others failed.

This all came about while perusing the “/r/shutdown” subreddit earlier today.

Somewhat of an aside, but Matt worked with Joe Stump on SimpleGeo years back, and Joe is someone I knew from the early 2000s in Michigan before he moved on to greener pastures. Matt also worked with Arsenio Santos at Cir.ca, and Arsenio was one of the better dev managers I’ve worked for over the years. So while I’ve never met Matt directly, he’s a reminder to me of the increasingly small world we live in.


Visitors getting the wrong website content

I recently had an exchange with a client about some of their web sites ‘cross bleeding’: a few visitors going to sitea.com were getting siteb.com. These are all hosted on the same server, stock Apache/PHP stuff, nothing special. However, one of the domains was SSL and had an entry for responding to port 443, which the others didn’t. If anyone tried sitea.com via https, they’d see siteb. I went in and patched that up in the config and thought everything was good.

A week later… “this is still happening, and getting worse”.

Well… I started to dig in a bit more, and noticed that the control panel we’d installed had created the new siteb entry with both IPv6 and IPv4 addresses, but the other sites, which got imported but not updated, were IPv4 only. Since siteb’s vhost was the only one bound to the IPv6 address, any visitor connecting over IPv6 matched it no matter which domain they asked for. I updated all the vhost definitions to respond on both IPv6 and IPv4, and… so far so good. I’m pretty sure this was the issue. It was only affecting a handful of people, they were reporting it in a meeting (based on the context, I infer they were mobile users), and from what I gather, more mobile operators are pushing IPv6 out to end users.
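
For reference, a minimal sketch of the fix in Apache vhost terms; the addresses, names, and paths here are placeholders rather than the client’s actual config:

# Before: only the IPv4 address was bound, so IPv6 visitors fell
# through to whichever vhost did listen on IPv6 (siteb, in our case).
<VirtualHost 192.0.2.10:80>
    ServerName sitea.com
    DocumentRoot /var/www/sitea
</VirtualHost>

# After: bind the IPv6 address too (note the brackets), or just
# use *:80 to listen on everything.
<VirtualHost 192.0.2.10:80 [2001:db8::10]:80>
    ServerName sitea.com
    DocumentRoot /var/www/sitea
</VirtualHost>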

If you’re seeing the problem I was, check that you’re handling both IPv6 and IPv4.

Getting console.log from phantomjs with selenese runner

Pretty much titled this what I was looking for this morning… :)


time java -jar bin/selenese-runner.jar -t 10000 --baseurl http://yourURL \
--phantomjs bin/phantomjs  \
--cli-args --webdriver-logfile=tests/browser.log \
--cli-args --webdriver-loglevel=debug \
--width 1280 --height 1024 \
--set-speed 130 --screenshot-dir public/tests/screenshots \
--screenshot-on-fail public/tests/screenshots \
--html-result public/tests \
tests/selenium/suite*.html

Note the “--cli-args” flags. Yes, you can have multiple, and yes… this is documented on the selenese-runner page, but it wasn’t obvious to me, so I’m posting it here anyway.

letsencrypt live and functioning in virtualmin

I’ve been a virtualmin/webmin user for many years. Last summer, I heard about “Let’s Encrypt”, a service offering free short-term SSL certificates to everyone. A colleague tested the service in December, and I started messing around with it based on his recommendation. I found a few shell scripts that would help automate getting SSL certs for the domains I manage, and started testing those. A bit cumbersome, but much easier than the traditional SSL cert dance with other vendors. As a side note, I noticed that out-of-the-box support was better for Ubuntu than CentOS.

Anyway, I’d been searching around for ways to automate this within my virtualmin systems, and found a vague reference in someone’s post that webmin knew about Let’s Encrypt. I loaded up my virtualmin and found that… goodness me, in the virtualmin area for each domain’s “manage SSL certificates”, there was a “let’s encrypt” tab ready to go. I needed to install the letsencrypt code and symlink it to a ‘letsencrypt’ name on my path, and virtualmin did the rest. It was literally just pressing the “request a certificate” button and waiting < 20 seconds.
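
In case it helps, a minimal sketch of the client setup I’m describing; the /opt location and the symlink path are my own choices, not anything virtualmin mandates:

# Grab the official client (since renamed to certbot; it was
# "letsencrypt" at the time) and expose it on the PATH under
# the name the panel expects.
git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt
ln -s /opt/letsencrypt/letsencrypt-auto /usr/local/bin/letsencrypt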

[Screenshot: the “Let’s Encrypt” tab under a domain’s “manage SSL certificates” section in virtualmin]

Another aside: you can fairly easily support multiple SSL certs on one shared IP address with any modern stack and browsers, thanks to SNI (Server Name Indication). This has been a supported ‘thing’ since… 2007, IIRC. However, it relies on newer networking stacks, and from what I understand, Windows XP will never support it. So… if you need to support Windows XP browsers/clients, you may need to stick with the “one IP per SSL cert” approach. For rather run-of-the-mill consumer-oriented sites aimed mostly at users with modern stacks, there’s little reason not to be using at least letsencrypt as a free SSL cert to encrypt your traffic.
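
To illustrate, here’s a rough sketch of what SNI looks like at the Apache level: two HTTPS vhosts with two certs sharing one address (names and cert paths are placeholders):

<VirtualHost *:443>
    ServerName sitea.com
    SSLEngine on
    SSLCertificateFile /etc/ssl/sitea.crt
    SSLCertificateKeyFile /etc/ssl/sitea.key
</VirtualHost>

<VirtualHost *:443>
    ServerName siteb.com
    SSLEngine on
    SSLCertificateFile /etc/ssl/siteb.crt
    SSLCertificateKeyFile /etc/ssl/siteb.key
</VirtualHost>

# Apache picks the right cert based on the hostname the client sends
# in the TLS handshake; non-SNI clients (e.g. IE on Windows XP)
# always get the first vhost's cert.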

WordPress security woes and plan of attack

I’ve been involved in a few wordpress security snafus over the last 3-4 months, almost none of which were my doing directly, but I’ve still gotten involved anyway. I’ve been disappointed, but not surprised, that even some commercial security and scanning services seem to miss rather obvious issues, and this sours me even more on the entire idea of using those commercial services in the first place. A friend found the ‘social.png’ issue on a server, and had scanned with maldet, clamav, bitdefender, and, I think, the sitelock.com service (not 100% sure on that one). All of them failed to notice that a .png file had eval()-based PHP code in it.

To that end, I’m putting some restrictions/requirements on new wordpress projects that I get involved with:

  • fail2ban has to be installed and running
  • maldet/clamav (they have found some issues in the past)
  • all files and directories are not writeable – a small shell script makes them writeable on demand for a few minutes, then reverts all files/directories back to unwriteable shortly thereafter (see the first sketch after this list)
  • blocking all outbound port 80 and 443 traffic via iptables, with a specific whitelist of exceptions (second sketch below).  I can think of but a handful of reasons why PHP code needs to initiate unrestricted outbound traffic (maybe I’m wrong?)
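
Here’s a minimal sketch of that writeability toggle, assuming a docroot of /var/www/site and Apache running as www-data (both placeholders for your own setup):

#!/bin/sh
# unlock.sh: let the web user write to the tree for 15 minutes
# (plugin updates, media uploads), then lock everything back down.
# In practice you may want to exempt wp-content/uploads.
DOCROOT=/var/www/site
WEBUSER=www-data

chown -R "$WEBUSER" "$DOCROOT"
sleep 900
chown -R root:root "$DOCROOT"
find "$DOCROOT" -type d -exec chmod 555 {} +
find "$DOCROOT" -type f -exec chmod 444 {} +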
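
And a sketch of the outbound lockdown; the whitelisted host is just an example of something a site might legitimately need to reach:

# Allow new outbound HTTPS only to whitelisted hosts, then reject
# all other new outbound connections to remote ports 80/443.
# Replies to inbound visitors are unaffected, since those don't
# target remote ports 80/443.
iptables -A OUTPUT -p tcp -d api.wordpress.org --dport 443 -j ACCEPT
iptables -A OUTPUT -p tcp -m multiport --dports 80,443 -m state --state NEW -j REJECT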


I’m picking on wordpress mostly because it’s the cleanup I’ve had to wrestle with over the last few months, but there’s little reason these steps don’t apply to any web project, really. The one that came up this week is on a managed server (“you can’t have root because you might do something to compromise security… but go ahead and install wordpress and do whatever you want”), and they called and said “hey, you’re infected”. But as a managed service that I don’t even have shell access to, doesn’t the managed-server company bear some responsibility for preventing these sorts of situations in the first place? At >$500/month, I expected better service (it wasn’t my client, wasn’t my hosting-company choice; I’m just now being looped in because of the exploits).

There are two main issues at play:

1.  bad code allows PHP to be written into world-accessible locations, where it can then be executed via URL

2.  the executed code can then talk to other servers on the internet, typically over ports 80 or 443

Stopping public folders from being writeable and blocking unrestricted outbound traffic both seem to go a long way toward preventing these two issues.

Am I missing something? Don’t say “go get wordfence” or something similar. Well, you can say it, but that really only addresses a subset of potential issues. I wouldn’t say no to something like wordfence on top of these other steps, but it doesn’t address a joomla project, or drupal projects, or whatever.

quick web idea generator

Running low on ideas? At our local wordpress helpdesk meeting this morning, someone mentioned his business venture, and it turned out he didn’t have the domain yet (but had already incorporated it into his web copy, strangely). Fortunately, it was still available and has since been secured, but it was quite a simple name (it started with the word ‘triangle’, as that’s our regional nickname).

I did a quick search at leandomainsearch.com for ‘triangle’, then filtered to only domains starting with ‘triangle’, and was surprised at how many names were available. Of course, a domain name does not a business make, but you may get some neat ideas just by playing around.

Some links to region-based domain-name searches that might spark some ideas in you :)

http://www.leandomainsearch.com/search?q=triangle
http://www.leandomainsearch.com/search?q=michigan
http://www.leandomainsearch.com/search?q=ohio
http://www.leandomainsearch.com/search?q=triad
http://www.leandomainsearch.com/search?q=hoosier
http://www.leandomainsearch.com/search?q=cajun

I’m sure you can think of regional nicknames for your own neck of the woods!

Grails ckeditor full settings

Took me a while to find this, and I’m posting it here primarily for my own memory, but hopefully it’s useful to some of you as well.

 <ckeditor:config var="toolbar_Custom">
[
{ name: 'document',
items : [ 'Source','-','BulletedList', 'Link', 'Image', 'Font', 'FontStyle'] },
{ name: 'document', groups: [ 'mode', 'document', 'doctools' ], items: [ 'Source', '-', 'Preview', '-', 'Templates' ] },
{ name: 'clipboard', groups: [ 'clipboard', 'undo' ], items: [ 'Cut', 'Copy', 'Paste', 'PasteText', 'PasteFromWord', '-', 'Undo', 'Redo' ] },
{ name: 'links', items: [ 'Link', 'Unlink', 'Anchor' ] },
{ name: 'editing', groups: [ 'find', 'selection', 'spellchecker' ], items: [ 'Find', 'Replace', '-', 'SelectAll', '-', 'Scayt' ] },
{ name: 'forms', items: [ 'Form', 'Checkbox', 'Radio', 'TextField', 'Textarea', 'Select', 'Button', 'ImageButton' ] },
'/',
{ name: 'basicstyles', groups: [ 'basicstyles', 'cleanup' ], items: [ 'Bold', 'Italic', 'Underline', 'Strike', 'Subscript', 'Superscript', '-', 'RemoveFormat' ] },
{ name: 'paragraph', groups: [ 'list', 'indent', 'blocks', 'align', 'bidi' ], items: [ 'NumberedList', 'BulletedList', '-', 'Outdent', 'Indent', '-', 'Blockquote', 'CreateDiv', '-', 'JustifyLeft', 'JustifyCenter', 'JustifyRight', 'JustifyBlock', '-', 'BidiLtr', 'BidiRtl' ] },
{ name: 'insert', items: [ 'Image', 'Table', 'HorizontalRule', 'Smiley', 'SpecialChar', 'PageBreak', 'Iframe' ] },
'/',
{ name: 'styles', items: [ 'Styles', 'Format', 'Font', 'FontSize' ] },
{ name: 'colors', items: [ 'TextColor', 'BGColor' ] },
{ name: 'tools', items: [ 'Maximize', 'ShowBlocks' ] },
{ name: 'others', items: [ '-' ] },
{ name: 'about', items: [ 'About' ] }
]
 </ckeditor:config>

You can then reference the custom toolbar like so:

<ckeditor:editor name="body" toolbar="Custom">${content?.body}</ckeditor:editor>

Comment out any sections in the toolbar config tags which you don’t want to show up in the CKEditor. This is current as of version 4.4.1 (I think). If/when CKEditor adds more functionality or renames things, this might get out of sync, but it works as of today.

SunshinePHP 2015 thoughts

Just got back from SunshinePHP 2015 – one of the better tech conferences I’ve been to in a while. Lots of familiar faces (Cal, Michaelangelo, MWOP, Clark, etc.), but also a number of new people I’d never met before (hello James, Marian, David, Larry and others!). Some people I’d known by name I finally got to meet (hello Paul Jones and Lorna Jane), and of course, some great sessions.

*Probably* one of the best sessions for me was one of the ‘uncon’ sessions – Paul Jones’ “Action Domain Responder” talk.  ADR is something Paul’s been passionate about for a while, and I see why.  I’m not 100% convinced the concepts will take over any time soon, but I really understand the motivation a lot better (may try to blog on this a bit more in the near future, or perhaps get Paul on webdevradio.com to dive deeper).

Also had fun getting to meet and know Larry Kane, who attended the ‘freelance’ uncon session with me. His passion and excitement for freelance development really energized the room (and me) to push some things forward.

Daniel Cousineau’s presentation on beanstalkd was informative – I’d heard about it but had never investigated it enough. It’s definitely small and lightweight enough to consider for the next project.

As with most conferences, there were some time slots with too many good choices, and I know whichever I chose, I missed out on something else equally good. I actually missed the entire last slot because I ended up chatting with Daniel and getting a whole new perspective on development that I’d… well, I may have had it at some point, but had lost it over the years. Sometimes the best moments at conferences are the social times, not the sessions directly (which was also part of the point of Cal’s closing remarks – all about community).

Decompressing after my trip, but wanted to thank everyone at SunshinePHP (Adam, the speakers, fellow attendees, etc) for a great conference :)

purpose of framework benchmarking speed

I’ve followed the techempower benchmarks, and every now and then I check out benchmarks of various projects (usually PHP) to see what the relative state of things is. Inevitably, someone points out that “these aren’t testing anything ‘real world’ – they’re useless!” Usually it’s from someone whose favorite framework has ‘lost’. I used to think along the same lines, namely that “hello world” benchmarks don’t measure anything useful. I don’t hold quite the same position anymore, and I’ll explain why.

The purpose of a framework is to provide convenience, structure, guidance and hopefully some ‘best practices’ for working with the language and problem set you’re involved with. The convenience and structure come in the form of helper libraries designed to work together in a certain way. As code, these have a certain execution cost. What a basic “hello world” benchmark measures is at least some of that overhead.

What those benchmark results are telling you is “this is about the fastest this framework’s request cycle can be invoked while doing essentially nothing”. If a request cycle to do ‘hello world’ is, say, 12ms on hardware X, it will *never* be any faster than 12ms. Every single request you put through that framework will be 12ms *or slower*. Adding in cache lookups, database calls, disk access, computation, etc – those are things your application will need to do regardless of what supporting framework you’re building in (or not), but the baseline fastest performance framework X will ever achieve is 12ms.
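
If you want to establish that baseline yourself, a quick sketch using ApacheBench against a bare ‘hello world’ route (the URL and request counts are placeholders):

# 1000 requests, 10 concurrent; the min/mean times approximate the
# framework's floor. No real request through it will ever be faster.
ab -n 1000 -c 10 http://localhost/hello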

These benchmarks are largely about establishing that baseline expectation of performance. I’d say they’re not always presented that way, but that’s largely the fault of the readers. I used to get a lot more caught up in “but framework X is ‘better’” discussions, because I was still reading them as a qualitative judgement.

But why does a baseline matter?  A standard response to slow frameworks is “they save developer time, and hardware is cheap, just get more hardware”.  Well… it’s not always that simple.  Unless you’re developing from day one to be scalable (abstracted data store instead of file system, centralized sessions vs on disk, etc), you’ll have some retooling to do.  Arguably this is a cost you’ll have to do anyway, but if you’re using a framework which has a very low baseline, you may not hit that wall for some time.  Secondly, ‘more hardware’ doesn’t really make anything go faster – it just allows you to handle more things at the same speed.  More hardware will never make anything *faster*.

“Yeah yeah yeah, but so what?”  Google uses site speed in its ranking algorithm. What the magic formula is, no one outside Google will ever know for sure, but sites that are slower than your competitors’ *may* have a slight disadvantage. Additionally, as mobile usage grows, more systems are SOA/REST based – much of your traffic will be responses to smaller calls for blobs of data. Each request may not be huge, but they’ll need to respond quickly to give a good experience on mobile devices. 200ms response times will likely hurt you, even in the short term, as users just move to other apps, especially in the consumer space. Business app users might be a bit more forgiving if they have to use your system for business reasons, sort of like how legions of people were stuck using IE6 for one legacy HR app. They’ll use it, but they’ll know there are better experiences out there.

To repeat from above, throwing more hardware at the problem will never make things *faster*, so if you’ve got a slower site that needs to be measurably faster, you’ve possibly got some rearchitecting to do.  Throw some caching in, and you may get somewhat better results, but at some point, some major code modifications may be in order, and the framework that got you as far as it did may have to be abandoned for something more performant (hand rolled libraries, different language, whatever).

Of course, there’s always a maintainability aspect – I don’t recommend PHP devs throw everything away and recode their websites in C. While that might be the most performant, it might take years, vs some other framework or even a different language. I’ve incorporated Java web stacks into my tool belt, and have some projects in Java as well as some PHP ones. I benchmarked a simple ‘hello world’ in laravel 4, zf2 and java just this morning. On the same hardware, the java stack was about 3-4 times faster (yes, APC cache was on). Does this mean that all java apps are 4 times faster than PHP apps? Of course not; it just means their request-cycle baselines differ. This was on PHP 5.4.34 – I’m interested in trying out PHP 7 soon to see what the improvements will be overall.

Grails configuration in views

I don’t know why it’s taken me this long to figure this out, but… injecting the Grails configuration object into the view layer is pretty simple.

In a Grails filter, make an ‘after’ handler like this:

after = { Map model ->
    // expose the app config to every rendered view; guard against
    // actions that return no model
    if (model != null) {
        model.config = grailsApplication.config
    }
}


That’s pretty much it.  In your views, you can access ${config} directly.
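
For example (grails.serverURL is just a standard config key, used here for illustration):

<%-- any .gsp view; model.config was set by the filter above --%>
<a href="${config.grails.serverURL}">home</a>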

This *seems* to be safe.  Are there any downsides to this approach?