After releasing version 0.1.0, I felt I needed to add one more feature to this gem: the ability to configure it globally and per model. It's actually a small change, but I hadn't had enough time to do it. Now it's complete.
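Roughly, usage looks like the sketch below; has_uuid and the storage formats come from the gem itself, while the configuration block and option names are assumptions for illustration:

```ruby
# config/initializers/active_record_uuid.rb
# Hypothetical global defaults (the configure DSL and option names are illustrative)
ActiveRecordUuid.configure do |config|
  config.column   = :uuid
  config.store_as = :binary
end

# Per-model options passed to has_uuid override the global defaults
class Post < ActiveRecord::Base
  has_uuid :store_as => :base64
end
```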
In the example above, the gem merges the global configuration options with the options passed to has_uuid.
There's a config generator that writes the default configuration file into the config/initializers directory. Run the following generator command, then edit the generated file.
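The command looks something like this; treat the generator name as an assumption and check the gem's README for the exact invocation:

```
$ rails generate active_record_uuid:config
```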
A couple of weeks ago, while I was developing the active_record_uuid gem, I needed to extend some existing methods in the active_record library. The first extension was to send portal_uuid instead of portal_id in the association methods such as has_many. The second was about quoting the uuid value before sending it to MySQL. I went through the active_record source code to find the best places to hook in. After spending some time understanding the codebase, I found the places to override: ActiveRecord::Associations::ClassMethods and ActiveRecord::ConnectionAdapters::Quoting. The first thing that came to mind was to alias_method_chain these methods. I didn't think much about it; I wrote the tests and implemented it.
Reading that code again, I realized there were two things that could change in the future: the module name and the method names. If one day the Rails team decided to rename the module, I would have to make another commit to fix it. In addition, the chained method names are a bit long and cumbersome as well.
I took one step back and thought about it again. That module is simply included into ActiveRecord::Base. That reminded me of a Ruby lesson: the method lookup path is the reverse order of module inclusion. Why does Ruby do that? Because it is designed to be extensible.
Therefore, if I define another module and include it last, my method runs before the original one. That's cool, isn't it? Calling the original version is even easier: a call to super does the job. It's awesome.
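Here is a minimal, self-contained sketch of that pattern; the module and method names are made up and this is not the actual gem code:

```ruby
module Greeting
  def hello
    "Hello"
  end
end

module LoudGreeting
  def hello
    "#{super}!!!"   # super reaches the method from the earlier-included module
  end
end

class Person
  include Greeting
  include LoudGreeting   # included last, so it is found first in the lookup path
end

Person.new.hello # => "Hello!!!"
```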
Then a question came to mind: what is alias_method_chain for, then? Well, in the case above I wouldn't need it, but it is still useful for overriding methods that are defined directly on the class itself.
```ruby
class Person
  def hello
    "Hello"
  end
end

module MyModule
  def hello
    "hello from module"
  end
end
```
There is no point in including MyModule into the Person class here, because Ruby finds the method defined in the class before it ever looks at the module. In this case, only alias_method_chain could help.
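For completeness, a minimal Rails 3-era sketch with ActiveSupport's alias_method_chain; the :shouting suffix is made up for illustration:

```ruby
require 'active_support/core_ext/module/aliasing'

class Person
  def hello_with_shouting
    "#{hello_without_shouting} FROM ALIAS_METHOD_CHAIN"
  end
  alias_method_chain :hello, :shouting
end

Person.new.hello # => "Hello FROM ALIAS_METHOD_CHAIN"
```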
It's hard to find good guidelines for doing this kind of extension when developing a gem, but this approach is a better way to do the job.
For the past two weeks, I have been working on the active_record_uuid gem. It has been a nice experience because I have learnt quite a lot, even though it's a small gem. This gem provides uuid support for ActiveRecord models. It allows you to store a uuid as binary, base64, hexdigest, or string. The gem is well tested (55 examples). Fork it or create an issue on GitHub if you see a bug.
The good thing about this gem is that you query the data using a plain uuid string while it is stored as binary, base64, hexdigest, or string.
```ruby
post = PostBinary.create(:text => "Binary uuid1")
# INSERT INTO `post_binaries` (`created_at`, `text`, `updated_at`, `uuid`) VALUES ('2012-06-20 17:32:47', 'Binary uuid1', '2012-06-20 17:32:47', x'4748f690bac311e18e440026b90faf3c')

post.uuid
# "4748f690-bac3-11e1-8e44-0026b90faf3c"

# it works as usual for finding records
PostBinary.find_by_uuid(post.uuid)
PostBinary.where(:uuid => post.uuid)
PostBinary.find(post)
PostBinary.find(post.uuid)
PostBinary.find([post.uuid])

post.comments.create(:text => "Comment 1")

# access the value that stored in db
post.reload
post.attributes_before_type_cast["uuid"]["value"]
```
Just by reading the title, you could probably answer this question very well: a module cannot be instantiated and is used as a mixin, while a class can be instantiated, and so on. However, this blog post is not a tutorial at all. There is something else you can learn from it.
Last week, I paired with my boss, @jensendarren. He asked me a question about my code that I had never thought about before: why don't you make ListingConverter a module and mix it into the Listing class? He was actually asking the right thing, because sometimes I write a module, and other times I write a class and use delegate. Here is my code.
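Roughly, it looks like the sketch below; only the #to_solr method and the idea of many small private helpers come from this post, the rest is illustrative:

```ruby
class ListingConverter
  def initialize(listing)
    @listing = listing
  end

  # the only public method; builds the Solr JSON document for the listing
  def to_solr
    { :id => @listing.id, :name_text => name }
  end

  private

  # ... plus almost twenty small private helpers like this one
  def name
    @listing.name.to_s.strip
  end
end
```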
At that time, I couldn't answer him very well. I had read the Rails AntiPatterns book some months before, and I followed it because it was very convincing to me.
To be honest, I had never thought about whether to use a module or a class. I just understood that the author was trying to break responsibilities into multiple classes, following the Single Responsibility Principle (SRP). Each class should have a very specific purpose and few reasons to change.
Thinking about SRP reminds me of the blog post SOLID Design Principles by Gregory Brown, which I also read some months ago. For me, it's an excellent post because it changed me quite a lot. I recommend you go through it, at least the SRP part. My answer takes it as a reference.
Both approaches, the mixin module and the separate class, achieve the same result, but there are cases where we should use one rather than the other. In this case, I would say a class wins over a module in terms of efficiency.
My ListingConverter class contains only one public method, #to_solr, and almost 20 private methods. It is responsible for converting a listing into Solr's JSON format.
If ListingConverter were a module, Listing would contain unnecessary methods, and if we mixed in 50 such modules, a Listing instance would become a bigger and bigger object with perhaps 200 extra methods. What if some of those modules had name collisions? It would be difficult to track down the problem and find out what's wrong across 50 mixins. A Listing instance rarely calls to_solr, yet those additional methods from each module are always there, which is not optimal at all. Things get worse when we load 5000 Listing instances at a time, to generate a report for example, because memory usage climbs steadily.
Making ListingConverter a class is more about single responsibility. This class does only one thing: convert an ActiveRecord object into Solr's format. It is better to treat the Listing class as a big entity containing many small entities; it should forward the messages it receives to the objects it contains. Finally, use delegate from Rails to make the interaction a bit easier. What delegate does is define the methods you pass in and forward those calls to the :to object. You can also do it manually, as below.
```ruby
def to_solr
  ListingConverter.new(self).to_solr
end
```
It instantiates ListingConverter only when we call the #to_solr method, which is more efficient. At the end of the day, the Listing class gains only one additional public method.
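With delegate, the same wiring becomes a one-liner; this is a sketch, and the private converter helper is my own naming:

```ruby
class Listing < ActiveRecord::Base
  delegate :to_solr, :to => :converter

  private

  # memoization is optional; the converter is only built when #to_solr is called
  def converter
    ListingConverter.new(self)
  end
end
```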
Last week, we decided to remove the sunspot gem from the new version of the app. Therefore, I rolled out a simple and small Solr library.
I went through the ActiveRelation Walkthrough episode on RailsCasts a long time ago, and now I have a chance to do something similar. I want the search response object to load ActiveRecord objects lazily. I don't want to call a #results method the way sunspot does. It's actually a nice trick and simple to implement.
After sending the request to Solr, it initializes the response object, passing in the Solr response and the class used to retrieve the results.
The trick is to override the #inspect method so that, in the console, you see the objects back. The #to_a method is responsible for loading the objects.
The question is when to load. I want the caller to be able to iterate through the collection using standard Ruby enumerable methods such as each, inject, and so on. My solution is to override method_missing and forward any undefined method to #to_a.
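Putting these pieces together, a minimal sketch of such a response object could look like this; the class name, instance variables, and Solr field access are assumptions rather than the actual library code:

```ruby
module Solr
  class Response
    def initialize(response, clazz)
      @response = response  # parsed Solr response hash
      @clazz    = clazz     # the ActiveRecord class to load, e.g. Listing
    end

    def total
      @response["response"]["numFound"]
    end

    # loads the ActiveRecord objects only when first needed
    def to_a
      ids = @response["response"]["docs"].map { |doc| doc["id"] }
      @records ||= @clazz.where(:id => ids).to_a
    end

    # so the console shows the loaded records instead of the wrapper
    def inspect
      to_a.inspect
    end

    # any unknown method (each, inject, [], collect, ...) goes to the array
    def method_missing(name, *args, &block)
      to_a.send(name, *args, &block)
    end
  end
end
```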
```ruby
@listings = Solr::Listing.text_search(params[:q], .....)
@listings.class          # Solr::Response
@listings.total          # 1367, total founds from solr
@listings.count          # 20, +count+ method on array
@listings[0]             # Listing class
@listings.collect(&:name)
```
Everything works fine except when rendering the view. It shows this error: [....] is not an ActiveModel-compatible object that returns a valid partial path.
```erb
<%= render @listings %>
```
After digging through Google, it simply means Rails doesn't know which partial path to render because this is my own object. Therefore, I just added one more method to the Solr::Response class. It's pretty simple, actually; ActiveModel does the same thing: http://apidock.com/rails/v3.2.3/ActiveModel/Conversion/to_partial_path.
```ruby
def to_partial_path
  @clazz._to_partial_path
end
```
There was still one last tiny problem: Rails didn't know that my new object is a collection, so it passed the whole object to each partial. How does Rails know whether the passed object is a collection or a single object? I dug into the Rails source code: it checks for the to_ary method, so I just aliased the method.
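In other words, something like this, reopening the hypothetical Solr::Response sketched earlier:

```ruby
module Solr
  class Response
    # Rails treats anything that responds to #to_ary as a collection when rendering
    alias_method :to_ary, :to_a
  end
end
```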
In the new project we built recently at Yoolk, I really enjoyed refactoring the app a lot.
One thing that always bothers me is if/else statements. I see them all the time. In my view, if/else should live at a low level of the code. We should not use it too much, because it makes the code less readable.
I remember that vorleak, my coworker, and I were moderators of a study group about the principles of refactoring a long time ago. Two principles really inspired me: less code == less bugs, and write code for humans, not for machines.
This looks simple to experienced Rails developers, but it's useful for novices. Here are some tips to reduce if/else statements:
Use the find_or_initialize_by / find_or_create_by methods
As the method names suggest, this is a cleaner way to fetch or create objects without an if/else.
```ruby
# A shorter version
user = User.find_or_initialize_by_user_name(params[:user_name])

# A longer version with an if statement
user = User.find_by_user_name(params[:user_name])
user = User.new(:user_name => params[:user_name]) if user.nil?
```
Use the presence method
The || operator is a common idiom in Ruby. However, it doesn't work well if the first operand is an empty string. The presence method returns nil instead of "" if the object is blank?, otherwise it returns the object itself.
```ruby
host = config[:host].presence || 'localhost'
```
Use a default value
Set a default value so that you don't need an else clause.
```ruby
# should do this way, it's more readable.
subscription = 'normal'
subscription = 'premium' if condition
```
Keep the if/else logic in fewer places
Wrap the logic in a function and reuse it wherever possible. Sometimes it's hard to extract it into a function because the cases are slightly different. Try to write it in a general way, think about its behavior, and make it fit.
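For example, a tiny sketch (the names are made up) where the branching lives in one reusable method:

```ruby
# the if/else lives in one place and every call site reuses it
def subscription_label(user)
  if user.premium?
    'premium'
  else
    'normal'
  end
end

subscription_label(current_user)
```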
If you feel you are writing too much if/else, take a step back and ask why you are doing it that way. Try to use objects that actually fit your scenario.
Here is my coworker's version of generating stats for the last 12 months; he manipulates string objects.
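For contrast, here is a small sketch of building those 12 month labels with Date objects instead of manipulating strings; the label format is my own choice:

```ruby
require 'date'

# Date#<< subtracts whole months, so no string slicing is needed
last_12_months = (0..11).map { |i| (Date.today << i).strftime("%b %Y") }
```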
I am often asked when to write a unit test versus an integration test, so I decided to write this blog post to clarify.
Unit Test
By definition:
it doesn’t talk to the database
it doesn’t communicate across the network
it doesn’t touch the file system
It tests a single module in isolation. We often stub or mock its dependencies. When its tests fail, we know it is exactly because of that module.
A stub object is a fake object that is not part of what the test verifies. It can be replaced by any other object.
A mock object is also a fake object, but a mock carries expectations about behaviour and is part of what we want to test (a collaborator).
```ruby
describe Statement do
  it "logs a message on generate()" do
    customer = stub('customer')
    customer.stub(:name).and_return('Aslak')

    logger = mock('logger')
    statement = Statement.new(customer, logger)

    logger.should_receive(:log).with(/Statement generated for Aslak/)

    statement.generate
  end
end
```
The above test fails if the logger does not receive #log with the specified argument. customer is simply a fake object with no expectations, and it cannot make the test fail. Remember, this test is about logging a message.
It is a kind of white-box testing: it tests the internal workings of the application. You dictate to the software that it should do this and that.
Generally, private methods are already tested indirectly through the public methods, so you should not test them explicitly. However, if you feel that a private method is crucial to making your class work correctly, consider giving it a better name and promoting it to a public method.
Unit tests alone are not enough to make sure the application works correctly.
They are much faster than integration tests.
In Rails, these are the functional tests: model specs and controller specs (which do touch the database).
Integration Test
It tests the interaction between components to make sure they work nicely with each other.
It is a kind of acceptance testing: it focuses on what the user sees and how the user interacts with the system. It can also be called black-box testing, where we don't care how something is done; we care only about the outcome.
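To make the comparison concrete, here is a minimal sketch of such an integration (request) spec using Capybara, placed under spec/requests; the path, labels, and button text are assumptions:

```ruby
require 'spec_helper'

describe "Tasks" do
  it "creates a task" do
    visit "/tasks"
    fill_in "Name", :with => "Write blog post"
    click_button "Create Task"
    page.should have_content("Write blog post")
  end
end
```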
The above test sends an actual request to /tasks, fills out the form, and submits it. It expects the content we filled in to be displayed on the page. If this were a unit test (in this case a functional test, a.k.a. a controller spec), we would test it differently: we would not send an actual request, and we would assert that a @task object should receive #save.
It is better than a unit test in that it mimics real user behaviour and exercises the entire application stack.
Should we still write controller specs? The answer is yes, for the sad path; leave the happy path to the integration tests. Doing this makes your test suite a bit faster. Check this blog post.
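For instance, a sad-path controller spec might look roughly like this; the controller, params, and template names are made up:

```ruby
describe TasksController do
  describe "POST create" do
    it "re-renders the form when the task is invalid" do
      post :create, :task => { :name => "" }
      response.should render_template(:new)
    end
  end
end
```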
Where to get started?
Should we write the unit test or the integration test first? If you haven't watched this episode from RailsCasts, watch it. He follows outside-in development, starting from the request spec, then the controller spec, and then the model spec.
When using RSpec in my Rails projects, I often add :focus => true to describe/context/it blocks and let Guard run only those new specs. Later, I have to remove it from various places in order to run the whole suite, which is a bit annoying. Therefore, I decided to write a Ruby script to do just that:
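A minimal sketch of what such a script/remove_focus.rb can look like; the regular expression here is my own approximation rather than the exact one from the original script:

```ruby
# Strip ", :focus => true" from describe/context/it lines in all spec files.
Dir.glob("spec/**/*_spec.rb").each do |path|
  content = File.read(path)
  cleaned = content.gsub(/^(\s*)(describe|context|it)(.*?),\s*:focus\s*=>\s*true(.*)$/, '\1\2\3\4')

  if cleaned != content
    File.open(path, "w") { |f| f.write(cleaned) }
    puts "Removed :focus from #{path}"
  end
end
```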
What this code does is scan the whole spec directory and remove :focus => true from every describe/context/it block. You can see I use a fairly strict regular expression just to make sure I'm doing the right thing. I placed this script under the script directory of my Rails app. You may write this code better, but this is how I do it. I also created a rake task and added it to an hg pre-commit hook.
```ruby
namespace :spec do
  desc "remove :focus => true in all specs"
  task :remove_focus do
    require_relative '../../script/remove_focus'
  end
end
```
I have been working at Yoolk for about three and a half years. It is the most enjoyable and challenging workplace I have ever worked at. I just want to share my thoughts on my skills and on how and where I should improve.
After going through several Ruby books, I was convinced again and again to write the test before the code. Basically, I could have done this since the day I joined the Mango team; however, my bad habit was to write the test afterwards, probably because I'm very curious and anxious to see the result rather than the test. At that time my testing theory was quite poor. It was probably a year after I got promoted that I spent a lot of time reading testing theory from various books, but I still wrote the tests after writing the code.
Two months ago, I went through many links describing how professional Rubyists work while coding. I learnt how to do integration testing with Cucumber, Capybara, factory_girl, and Rails 3. I felt I needed to do some experiments, so I took the chance at work to improve our test coverage. Even though I could not do it very well the first time, and I still sometimes wrote the test afterwards or opened the browser to test manually, it was a very nice experience. Testing drives me really well.
Another problem is that I often get distracted while working. Some people ask me to do things, other people have questions. It messes up my workflow and my rhythm. Sometimes there are urgent emails that need a quick reply, or servers go down. All of those things make coding less enjoyable. Thinking about it again, what I can do is write more documentation, because I'm the one who writes the main API used by many people. Those people could consult the documentation rather than me; only the critical issues should come to me. That would help a lot. Writing all the documentation in one go is a boring task because the API is now fairly full-featured. Therefore, I took the chance to write documentation for my new Audit API, and I tell everyone to go through it before asking me. Then I can code in peace.
I have watched a few episodes from PeepCode as well. I felt I was addicted to checking email, news, Facebook, and Twitter; it burnt me out sometimes, and I got very little done. I reviewed my daily work and solved those problems: I stay offline from those while I'm coding, check email only a few times per day, and check Facebook only when I finish a task. Then everything is back to being productive.
The other thing I had never done before is contributing to the open source community. I have worked with many OSS projects, but I never contributed back, because I didn't know how to do it or why I should. Now I understand why it's important and how useful it is to others. I have fixed a few bugs in open source projects; the problem was that I needed to learn how to get the code committed back. I used to ask my coworker how to do it, but he didn't explain it very well. Next week, I will try on my own to contribute back what I have done in my personal projects.
Here is some of my experience after working with REST web services for a few years. You may find it useful.
The Web
Some people, like me previously, who come from a relational database background don't understand the benefit of the resource URI when talking about REST web services. They tend to rely on the primary key of a database record rather than the resource URI. In the JSON response, the URI is usually not sent to the client.
```json
{ "id": 1, "name": "Chamnap", "job": "developer" }
```
In most cases, the above response works without any problem. The trouble arrives when the client needs to send further requests about that resource (maybe it needs more data). Which resource URI? No one remembers it. The client has to reconstruct the URI by itself, which is a bad thing.
Payload
The other thing arises with the connections between resources. Look at the request payload:
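Something along these lines, where the client refers to a related resource by its URI; the field names are illustrative:

```json
{
  "message": "I like it",
  "user_uri": "http://api.example.com/users/chamnap"
}
```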
Here, the server should be able to parse the URI, locate the correct resource, and save the association. The same thing should happen in the GET response for the resource as well.
The Location Header
This is the Location header in the response from the server. In the case of a 201 or 202 status code, it benefits the client API, which doesn't have to construct the URI itself to check whether the resource was processed successfully.
```
Location: http://api.example.com/users/chamnap
```
URL Navigation
One good example is the pagination link on a resource. Again, the API client doesn't have to build the URI to go to the next page. The main benefit of not building URIs on the client is that the server API is free to change them without breaking existing clients.
```json
{
  "id": 1,
  "from": "http://api.example.com/users/chamnap",
  "message": "I like it",
  "next_uri": "http://api.example.com/post/1/comments?page=2"
}
```
HATEOAS
In simple terms, HATEOAS means the response from the server is dynamically bound to the context of the resource. For example, you just sent a POST request to place an order. The response should simply contain the current resource plus the URIs to make a payment or cancel the order. Done this way, the client simply follows the URIs provided by the server API.
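For instance, such an order response could look like this; the field names and URIs are illustrative:

```json
{
  "id": 42,
  "status": "pending",
  "payment_uri": "http://api.example.com/orders/42/payment",
  "cancel_uri": "http://api.example.com/orders/42/cancel"
}
```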