Thursday, January 8, 2015

jQuery UI Draggable / Resizable with containment and CSS scale transformation

There are a bunch of questions and solutions about the problems in the interaction between these technologies; these are the best solutions that I found:
  1. http://stackoverflow.com/questions/10212683/jquery-drag-resize-with-css-transform-scale
  2. https://gungfoo.wordpress.com/2013/02/15/jquery-ui-resizabledraggable-with-transform-scale-set/
  3. http://stackoverflow.com/questions/17098464/jquery-ui-draggable-css-transform-causes-jumping
Although they are an improvement, they aren't totally accurate. What they basically do is calculate the correct position / size that the object must have after the interaction.

You can experiment with the example below: drag the small square to the right. Your mouse will be outside the parent div before you hit the edge.

And now, the same example, with the solution presented in [3].

Nice, now the small square moves with the mouse, but it goes beyond the parent when dragging or resizing, and this was the tricky part to solve. I spent a day trying to figure out how I could solve it, so I'm sharing my solution.

First, let's solve the draggable problem. The "bug" (the jQuery UI guys don't want to address it, so it's not a bug) occurs because jQuery UI internally uses absolute event positions.

Think about the parent without the scale: it would be bigger, right? So its size, as far as jQuery UI is concerned, extends beyond the limits of the scaled-down version. What we need to do is inform jQuery UI that our representation is smaller.

I tried in many ways to avoid monkey patching jQuery UI, but those efforts were fruitless. I had to expose the "containment" var in the ui parameter passed to callbacks. With this little modification I could use the start and stop callbacks to make jQuery UI work with my scaled containment sizes.

var dragFix, startFix, stopFix;

window.myApp = {
  layout: {
    zoomScale: 1
  },
  draggable: {
    // Same as the stock _uiHash, but also exposing "containment"
    // so the start/stop callbacks below can rescale it.
    _uiHash: function() {
      return {
        helper: this.helper,
        position: this.position,
        originalPosition: this.originalPosition,
        offset: this.positionAbs,
        containment: this.containment
      };
    }
  }
};

$.ui.draggable.prototype._uiHash = myApp.draggable._uiHash;

// Scale the containment box down before the drag starts...
startFix = function(event, ui) {
  ui.containment[2] *= myApp.layout.zoomScale;
  ui.containment[3] *= myApp.layout.zoomScale;
};

// ...and restore it when the drag stops.
stopFix = function(event, ui) {
  ui.containment[2] /= myApp.layout.zoomScale;
  ui.containment[3] /= myApp.layout.zoomScale;
};

// Compensate the mouse delta for the scale on every drag step.
dragFix = function(event, ui) {
  var deltaX = ui.position.left - ui.originalPosition.left;
  var deltaY = ui.position.top - ui.originalPosition.top;
  ui.position.left = ui.originalPosition.left + deltaX / myApp.layout.zoomScale;
  ui.position.top = ui.originalPosition.top + deltaY / myApp.layout.zoomScale;
};

// Usage: $('.item').draggable({ containment: 'parent',
//                               start: startFix, stop: stopFix, drag: dragFix });

Nice and clean, don't you think? After solving this, I guessed that making resizable work would be easy. The inner workings couldn't be very different, right? Wrong, dead wrong. What I thought would be solved in 5 minutes took the whole day.

The resizable code is very different. I expected to see the same algorithms applying movement constraints and other operations. Because of this, I had to find a new way to inform jQuery UI about my constraints.

After a day of tinkering with the resizable code, trying to find a solution that needed minimal changes to jQuery UI like with the draggable code, I was unable to find a solution that I liked.

First, I realized that I needed to change the methods e, w, n and s in _change to get correct widths and heights according to my zoom scale. This is something that could be done in the "resize" callback, but by then it's too late in the algorithm: the scaled position is needed by internal methods before we get a chance to change it.

After this I thought I was done, but resizing an element that isn't at position 0, 0 made it grow beyond the edge of the parent. Digging a bit more, I found that I needed some way to change the "woset" and "hoset" calculation, but I didn't find any way to do this without monkey patching the entire method.

The final solution is this:

Maybe you are asking yourself why I'm monkey patching. It's because I use Rails and I want to keep the benefits of the asset pipeline.

I'm not very proud of it, so I would love to know better ways to accomplish the same result in a simpler manner. If you know any, please share!

Saturday, January 3, 2015

New kid on the S3 direct upload block

Some time ago I saw Akita's post [Small Bites] Direct Upload para S3: a Solução Definitiva!. Wow, fantastic! I was just playing with the S3 direct upload solutions described around the web, and that could save me lots of time and give me a well-polished solution.

I started playing with refile and with how I could integrate it into my application. It didn't take much time for me to become disappointed.

I already use paperclip in my application; it's very well integrated, meets my requirements, and I like how it organizes files, so adapting what I have to refile would be a pain. But it opened my mind to the idea that S3 direct upload can be simpler, so I decided to experiment.

I liked how s3_direct_upload interacts with S3, keeping the filename. So I decided to make a hybrid of s3_direct_upload and refile: s3-upnow was born.

The idea of the gem is to be backend agnostic. For now it works only with paperclip, so if that's your upload gem, give s3-upnow a try and send your feedback. I guess that supporting other upload gems is not difficult.

I know that it's not well polished, but it's already working for me and maybe it can work for you. If you find scenarios where it causes problems, please let me know!

Happy hacking!

Tuesday, December 30, 2014

Metaprogramming Ruby

I'm reading Metaprogramming Ruby 2: Program Like the Ruby Pros. I started to read it because I guessed that metaprogramming could make pieces of my code simpler, but I discovered that my problems are about abstractions. I just need better abstractions. Nevertheless, the concepts exposed in the book are very good and made me feel excited.

The first time I read about the Ruby object model I was very impressed. Ruby was the first language I learned with two concepts very new to me: dynamic typing and pure object orientation. So understanding how all the pieces fit together, and getting comfortable with the notion that everything is an object, including classes, took some time to click.

I learn best by doing, and now I have an opportunity to apply metaprogramming (which I expect to blog about in the near future). By doing, and by trying to understand what I was doing, I can say the Ruby object model has clicked in my head. But one aspect that still confuses me a bit, and IMO isn't explained in great detail in this book, is method lookup.

How does Ruby go from the receiver of the call to the right location in the hierarchy of ancestors? One source of confusion was this picture:

I perceived the object model in a very organized form: for each class you have a singleton class, and its superclass is the singleton class of the class's superclass. After you get it, it's not that complicated. But what I hadn't perceived was that in the middle of all this you can have modules!
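That rule can be checked directly in irb; a minimal sketch (Animal and Dog are hypothetical names, used only for this demo):

```ruby
class Animal; end
class Dog < Animal; end

# The superclass of Dog's singleton class is the singleton class
# of Dog's superclass -- the "organized form" described above.
p Dog.singleton_class.superclass == Animal.singleton_class  # true
```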

Because of this, I was struggling with how I could add class methods to an existing class and also override some of its methods. For example, take the following code:

module OneModule
  def my_method
    puts "OneModule.my_method"
  end
end

class MyClass
  class << self
    include OneModule
  end
end

How can I tweak my_method's behavior? The first difficulty I had was mapping this code to the diagram above, since I didn't understand where the code was living.

After reading and re-reading the first chapters of Perrotta's book, and tinkering with irb, I understood that it was going into the singleton class of MyClass.

This code provides insightful output:

p MyClass.ancestors
p MyClass.singleton_class.ancestors
p MyClass.methods(false) == MyClass.singleton_class.methods(false)
[MyClass, Object, Kernel, BasicObject]
[#<Class:MyClass>, OneModule, #<Class:Object>, #<Class:BasicObject>, Class, Module, Object, Kernel, BasicObject]

When you define a class method, it's stored in the singleton class. So if I want to change a class method, I have to change the singleton class. But this also puzzled me: how can I add the new method without removing the actual one?

From the output it's easy to see the solution, but it took me a while to realize it was possible. I had guessed that the module's methods were inserted into the singleton class, but in fact the module is put in as an ancestor of the singleton class.
With this knowledge in mind you can write the following:

module OtherModule
  def my_method
    puts "OtherModule.my_method"
  end
end

class MyClass
  class << self
    include OtherModule
  end
end

p MyClass.my_method
p MyClass.singleton_class.ancestors
[#<Class:MyClass>, OtherModule, OneModule, #<Class:Object>, #<Class:BasicObject>, Class, Module, Object, Kernel, BasicObject]

You can also call "super" in "OtherModule" and Ruby will chain the calls correctly.
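A minimal self-contained sketch of that chaining (the module and class names mirror the ones above, but the snippet redefines everything so it runs on its own):

```ruby
module OneModule
  def my_method
    "OneModule.my_method"
  end
end

module OtherModule
  def my_method
    # super walks up the singleton class's ancestors and
    # reaches OneModule's implementation.
    "OtherModule.my_method -> " + super
  end
end

class MyClass
  class << self
    include OneModule
    include OtherModule  # included later, so looked up first
  end
end

puts MyClass.my_method  # OtherModule.my_method -> OneModule.my_method
```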

After all this, I still couldn't describe how method lookup works, and luckily someone has blogged about it: Ruby's method lookup path, Part 1.

The algorithm can be summarized as the following:
  1. Methods defined in the object’s singleton class (i.e. the object itself)
  2. Modules mixed into the singleton class in reverse order of inclusion
  3. Methods defined by the object’s class
  4. Modules included into the object’s class in reverse order of inclusion
  5. Methods defined by the object’s superclass.
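The precedence of steps 1, 3 and 4 can be verified with a small script (Mixin, Base and Child are hypothetical names used only for this demo):

```ruby
module Mixin
  def whoami; "Mixin"; end
end

class Base
  def whoami; "Base"; end
end

class Child < Base
  include Mixin
end

obj = Child.new
puts obj.whoami  # "Mixin" -- a module included into the class (step 4)
                 # wins over the superclass (step 5)

class Child
  def whoami; "Child"; end
end
puts obj.whoami  # "Child" -- the object's class (step 3) wins over the mixin

def obj.whoami; "singleton"; end
puts obj.whoami  # "singleton" -- the singleton class (step 1) wins over all
```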
Armed with this, I'm far better prepared to experiment and to answer questions when they pop up. So far, so good!

Tuesday, December 16, 2014

An improved way to render a CoffeeScript partial inside a CoffeeScript template

Some time ago I googled for a solution to render a CoffeeScript partial inside a CoffeeScript template, in an attempt to DRY my CoffeeScript templates.

This led me to https://coderwall.com/p/i62phq/how-to-render-coffeescript-partial-inside-coffeescript-template. Initially I turned up my nose at the syntax, but it solved my problem and wasn't that terrible, so I stuck with it.

After some time I needed to debug my JS response, and when I looked at the returned code I saw that I needed a better solution:

(function() {
  (function() {
    $('.footer').html("<div class=\'btn-group\'>...<\/div>");
  }).call(this);
}).call(this);

As you use more partials and your code starts to grow, this mess only gets worse. After banging my head on it a bit, I realized that what I needed was to render plain CoffeeScript, so I renamed my partial to _partial.coffee.erb and it worked like a charm!

Now my show.js.coffee is:

<%= render 'footer.coffee' %>

And my _footer.coffee.erb is:

$('.footer').html("<%= j render 'my_footer_html_partial' %>")

And the response is:

(function() {
  $('.footer').html("<div class=\'btn-group\'>...<\/div>");
}).call(this);

Cleaner, don't you think?

Monday, December 8, 2014

Setup wildcard subdomain with Bind

I found good documentation about configuring a DNS server to use wildcard domains for development: http://superrb.com/blog/2012/09/24/how-to-set-up-bind-on-ubuntu-for-a-wildcard-development-domain

All went well in my setup; I just had to adapt some paths and I made use of the Bind conventions described in the Arch wiki.

All well, except for the fundamental part: the wildcard subdomain. After googling a bit and reading some similar questions on Stack Overflow, this was my final configuration:

  • /etc/named.conf
zone "dev.com" {
        type master;
        file "dev.com.zone";
};

  •  /var/named/dev.com.zone
; BIND data file for local loopback interface
$TTL 14400
@ IN  SOA dev.com. root.dev.com. (
            2014110801   ; Serial
                 86400   ; Refresh
                  7200   ; Retry
               3600000   ; Expire
                 86400 ) ; Negative Cache TTL
@ IN  NS  dev.com.
@ IN  A
* IN  CNAME dev.com.
@ IN  AAAA  ::1
In case you didn't catch the difference: I changed the line "* IN A" to "* IN CNAME dev.com.".

PS: This is NOT a tutorial, so read the post linked at the start to make this work as you expect, since it contains everything else you need to configure.

Tuesday, December 3, 2013

Improving the performance of your tests with RSpec

After starting to get bored and frustrated from having to wait almost 10 seconds to run one integration test, I decided to look for a solution to the problem.

I was already using the spork DRb server to have Rails loaded in memory before the tests run, so I googled for a solution to my problem.

I found a great post on the great blog The Carbon Emitter: http://blog.carbonfive.com/2011/02/02/crank-your-specs/ which leads to another great post on another great blog, 37signals': http://37signals.com/svn/posts/2742-the-road-to-faster-tests

I set up perftools.rb to run on my machine and, as expected from the posts above, the garbage collector was the villain. Below are the call counts from the pprof.rb output, and the time, collected with the time command, to run just 1 test:

      271  45.5%  45.5%      271  45.5% garbage_collector
      58   9.7%  55.3%       58   9.7% OpenSSL::PKCS5.pbkdf2_hmac_sha1
      20   3.4%  58.7%       21   3.5% Nokogiri::XML::XPathContext#evaluate
real 8.93
user 0.07
sys 0.00

However, I adopted an approach a bit different from the posts above: I changed the spec/spec_helper.rb file as in railscast #413 Fast Test (pro):

RSpec.configure do |config|
  config.before(:each) { GC.disable }
  config.after(:each)  { GC.enable }
end

With this small change I got to know another villain, as the following output shows, a consequence of using devise's confirmable module. But a considerable reduction in the time of the same test is already noticeable:

      58  16.9%  16.9%       58  16.9% OpenSSL::PKCS5.pbkdf2_hmac_sha1
      20   5.8%  22.7%       20   5.8% Nokogiri::XML::XPathContext#evaluate
      19   5.5%  28.2%       19   5.5% Regexp#===

real 6.57
user 0.07
sys 0.01

I didn't find any way to make devise use a less expensive function, so I went to look at the code they use and applied the following monkey patch through the spec/spec_helper.rb file:

class Devise::KeyGenerator
  def generate_key(salt, key_size=64)
    OpenSSL::PKCS5.pbkdf2_hmac_sha1(@secret, salt, 1, key_size)
  end
end

I changed the 3rd argument of the function to 1; in the original code this value is 65536, the number of iterations the algorithm performs to generate the key. With this change my time dropped again, though this time much less significantly:
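The cost difference is easy to check in isolation; a quick sketch (the 'secret'/'salt' values are placeholders, not Devise's real inputs):

```ruby
require 'openssl'
require 'benchmark'

# One PBKDF2 derivation with a single iteration vs. the 65536
# iterations used in the original Devise code.
fast = Benchmark.realtime { OpenSSL::PKCS5.pbkdf2_hmac_sha1('secret', 'salt', 1, 64) }
slow = Benchmark.realtime { OpenSSL::PKCS5.pbkdf2_hmac_sha1('secret', 'salt', 65536, 64) }

puts slow > fast  # the iteration count dominates the cost
```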

      18   6.5%   6.5%       39  14.0% Psych::Parser#parse
      17   6.1%  12.5%       17   6.1% garbage_collector
      14   5.0%  17.6%       14   5.0% Regexp#===

real 6.27
user 0.08
sys 0.00

And according to the output presented by pprof.rb, there is nothing left to attack in the code to improve performance. It's also possible to get marginal gains by using capybara's within block to restrict page tests to specific regions of the DOM. When you account for thousands of tests, it's good to adopt it as a practice.

I then started to wonder what else could be making my code slow, and guessed it was the interaction with the database. Bingo! I set the following PostgreSQL options to off, making it practically an in-memory database (don't do this in production) and lowering the time to 3.74s:

fsync = off
synchronous_commit = off

real 3.74
user 0.09
sys 0.00

An improvement of almost 2.4x in the execution time of just 1 test! It's when running the whole test suite, which is still quite small (only 35 tests), that the speed difference becomes glaring: 26.5s vs 8.85s. And the more tests I have hitting the database, the worse it gets for the runs without the Postgres modifications.

In the meantime I also replaced spork with spring and adopted guard-rspec. Spring does the same job for the tests that spork does, but it also speeds up the execution of other commands, like rake routes, rake db:migrate, rails g, etc. Guard lets my tests run without me having to leave my text editor, and it even shows me a nice notification when they finish running!

I found this presentation very good too; I didn't see any application for it to my tests at the stage they are now, but the speedup they achieved is impressive.

I'm much happier with the length of my BDD cycle now. And you, do you have any tips to improve this performance even further?

Tuesday, February 12, 2013

Unicorn and Thin on Heroku

After seeing this post http://michaelvanrooijen.com/articles/2011/06/01-more-concurrency-on-a-single-heroku-dyno-with-the-new-celadon-cedar-stack/ I decided to configure an application on Heroku to use Unicorn instead of Thin.

If you look at the post above, there is a benchmark of how long the tests took and of the number of requests per second. I got really excited about this possibility for scaling applications, since each Heroku dyno serves one request at a time, and the only way to serve more requests with the Thin server is to increase the number of dynos.

Unicorn was designed to work with several processes, forking instances. This way, on a single dyno it's possible to have approximately 3 processes, due to the 512 MB memory limit of each dyno; but if you add one more dyno you have 6 processes running. Cool, don't you think?

With these exciting possibilities I decided to run some tests to see whether using it is really an advantage for making the application respond faster. My Unicorn configuration file ended up exactly like the one in this post: http://stackoverflow.com/questions/9344788/am-i-preloading-the-app-in-heroku-unicorn-correctly
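For reference, a minimal sketch of a Unicorn config along those lines (the worker count and the ActiveRecord hooks are assumptions; the linked answer has the full version):

```ruby
# config/unicorn.rb -- minimal sketch, not the exact file from the linked answer
worker_processes 3      # ~3 workers fit in a 512 MB dyno
timeout 30
preload_app true        # load the app once, then fork workers

before_fork do |server, worker|
  # Forked children must not share the parent's DB connection.
  ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord::Base)
end

after_fork do |server, worker|
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
end
```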

To collect the data I used the httperf command as follows: httperf --hog --server meuservidor.com.br --ssl --ssl-no-reuse --num-conn=500 --ra=10 --timeout 30

This command makes 500 SSL connections to the root page of my site (which doesn't hit the database), forcing a handshake through the --ssl-no-reuse option, at a rate (--ra=10) of 10 connections per second. I ran the command 3 times and averaged the results for the following values of --ra: 10, 40, 80, 160, 320 and 500.

So, on to the collected data:

I confess I expected more from Unicorn, since we have 3 processes serving requests. From 80 connections on, it established on average only 5 more connections per second, and in one of the tests Thin achieved slightly better performance.
The response time was another surprise to me: Thin managed to be faster when many connections were being established per second.

I don't see any clear advantage in using Unicorn for now; besides, a few (rare) times I ran into the error: httperf: failed to connect to SSL server (err=-1, reason=5).

After compiling this data, I thought I had been too gentle and decided to be more aggressive with the number of requests. I requested 1500 connections at a rate of 100 connections per second (a pretty busy site, don't you think?). In this configuration neither server managed to serve all the requests (timeouts), but the performance of both was similar, with Unicorn serving slightly more requests (1233 vs 1249), a practically equal connection rate (20.8 vs 21.6), and similar response times (3371 ms vs 3025 ms).

These last tests served more than 1000 concurrent connections. And that on Heroku's developer tier. Very good!

I still intend to run tests on pages that perform non-trivial database queries, to see whether the servers show different times, but that's for a future post...