Roku Developer Program

MarkRoddy
Visitor

BrightScript Testing

I'm finding the cycle of write code, push it to my Roku box, and step through each feature to see if it works incredibly frustrating. To the point that I'm thinking about how I could create a unit testing harness so I can automate this process.

I am currently trying to parse a .pls file, which has a simple syntax. Being a spoiled Python programmer, I'm used to being able to "import configparser", but right now I have to roll my own. In doing so I need to write a lot of supporting routines to deal with parsing strings, all of which have well-defined behavior that would be easy to write unit tests for, but I am currently stuck deploying and running to find each little error. A testing harness would be extremely useful in this scenario, but before I can write one there are some things I need to figure out first.
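To give a sense of the kind of supporting code I mean, here's a rough sketch of the hand-rolled parsing (ParsePls and its details are illustrative, not finished code):

function ParsePls(path as String) as Object
    entries = {}
    raw = ReadAsciiFile(path)
    for each line in raw.Tokenize(chr(10))
        line = line.Trim()
        ' Skip blanks and section headers like [playlist]
        if Len(line) > 0 and Left(line, 1) <> "["
            eq = Instr(1, line, "=")
            if eq > 0
                entries[Left(line, eq - 1)] = Mid(line, eq + 1)
            end if
        end if
    end for
    return entries
end function

Every one of those string-handling branches is exactly the sort of thing I'd like a unit test for.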

* Trapping Errors
If I'm going to report errors I need some capacity to trap them, to prevent a failure from stopping the harness's execution. What is the equivalent of a try/except block in BrightScript? I've been through the docs and have not found anything that states such functionality explicitly. The closest I found is that the documentation on the Run() function mentions running a test script as an example, but doesn't go into how one would actually perform the testing. The docs on the Eval() function seem like they might point to a path I could take, but it would probably be difficult without the next item; a small sketch of what I have in mind follows this list.

* Reflection
In order to gather a set of tests to be run, there would need to be some capacity for inspection, ideally in a way similar to existing xUnit frameworks. Of course I could manually build up a set of tests into, say, an array using function references, but this is less than ideal as it is time consuming and error prone, due to the risk of leaving out tests that should be run.
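To make both questions concrete, here is a minimal sketch (all names are illustrative, and it assumes Eval() and the get-last-error function behave as the docs suggest): tests aggregated by hand as name strings, each run through Eval() so a crash can be inspected instead of halting the harness.

sub RunAllTests()
    testNames = ["TestParseEmptyFile", "TestParseSingleEntry"]
    for each name in testNames   ' a forgotten entry silently never runs
        Eval(name + "()")
        ' &hFC is the documented "normal end" code
        if GetLastRunRuntimeError() = &hFC
            print "PASS: "; name
        else
            print "FAIL: "; name
        end if
    end for
end sub

sub TestParseEmptyFile()
end sub

sub TestParseSingleEntry()
end sub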

Any advice on these issues or ideas would be greatly appreciated.
6 REPLIES
MarkRoddy
Visitor

Re: BrightScript Testing

I've now had a few hours to hack at this, and for anyone interested here is what I've accomplished so far as well as where I think it can be taken.

I have coded up a set of "classes" in a traditional xUnit style, with a test case class containing assertion methods as well as functionality for running tests, collecting their results, and reporting them (currently by printing to the debug console). I accomplished the error handling by having all of the assert methods call a single fail() member routine, which stores an error message as an attribute and then performs i=1/0 to raise a runtime error. The runner executes the test fixture using Eval(). If an error is seen via the get-last-error function, the attribute is checked first: if it is set, the error was a failed assertion and is recorded as a 'test failure'; if it is not set, then a runtime error in the test itself has been encountered, so it is recorded as a 'test error'.
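Roughly, the scheme looks like this (a simplified sketch; &hFC is the documented "normal end" value from GetLastRunRuntimeError(), and per my reading of the docs Eval() runs in the caller's scope, so it can see the local variables):

function NewTestCase() as Object
    tc = {}
    tc.failureMessage = invalid
    tc.fail = TestCase_fail
    tc.assertEqual = TestCase_assertEqual
    return tc
end function

sub TestCase_fail(msg as String)
    ' Record why we failed, then force a runtime error so the
    ' runner's Eval() call is interrupted
    m.failureMessage = msg
    i = 1 / 0
end sub

sub TestCase_assertEqual(expected as Dynamic, actual as Dynamic)
    if expected <> actual then m.fail("values are not equal")
end sub

function RunOne(tc as Object, testName as String) as String
    Eval("tc." + testName + "()")
    if GetLastRunRuntimeError() = &hFC
        return "pass"
    else if tc.failureMessage <> invalid
        return "failure: " + tc.failureMessage   ' a failed assertion
    else
        return "error"                           ' the test itself blew up
    end if
end function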

The major drawbacks of this approach are the lack of syntactic support for object-oriented programming and the inability to do automated test aggregation, due to the lack of metaprogramming features. To work around the OOP issues I used a scheme where object allocation and initialization are segmented into two different procedures. This allows a "class" to be subclassed: the subclass calls the initializer of its parent from its own initializer and then sets whatever attributes and methods are necessary, which allows for method overriding. The big downside of this approach is that the subclass still needs to 'attach' all its own methods to the object. In the scope of a single class that will not be subclassed this isn't much of an issue, but it very quickly gets to be a pain to implement an initializer where the test methods are specified twice, not to mention how easy it is to forget to 'attach' a test method, which would then never be executed.

I don't think this is the right way to go in the long run unless formal syntax for OOP is added to BrightScript. I toyed with the idea of writing a pre-processor so that you could annotate methods as being members of a class and the necessary initializer could then be generated, but I don't want to put that much work into implementing functionality that could well be added to the core language in the future anyway. However, if it turns out that syntactic support for OOP is a highly desired feature among users and there is firm confirmation that it will not be added, I will reconsider this position.
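For anyone who hasn't seen the allocate/initialize split, it looks something like this (a stripped-down sketch with illustrative names):

function NewBaseCase() as Object
    obj = {}
    InitBaseCase(obj)
    return obj
end function

sub InitBaseCase(obj as Object)
    obj.setUp = BaseCase_setUp
end sub

sub BaseCase_setUp()
end sub

function NewMyTests() as Object
    obj = {}
    InitMyTests(obj)
    return obj
end function

sub InitMyTests(obj as Object)
    InitBaseCase(obj)             ' call the parent's initializer
    obj.setUp = MyTests_setUp     ' override a parent method
    ' The drawback: every test method must be attached by hand, and a
    ' forgotten line here means a test that silently never runs
    obj.testParseEmpty = MyTests_testParseEmpty
end sub

sub MyTests_setUp()
end sub

sub MyTests_testParseEmpty()
end sub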

The alternative approach I'm currently thinking about is a more procedural one (from the standpoint of the test author), which I think would alleviate some of the difficulty with writing tests in the OOP setup described above, but would introduce some implementation issues I haven't yet figured out how to resolve. If tests are written in a purely procedural manner, then test discovery becomes much simpler. Using some of the file-system functions, a harness could look for files matching some convention (for instance, all files whose names begin with 'test'), and then read and do basic parsing of each file to find all declared routines matching another convention (again, say all subroutines whose names start with 'test'), resulting in a list of routines that should be executed as tests. This would be much more complicated with the OOP approach, as the lack of OOP syntax makes it difficult to tell 1) which function is the constructor for the 'class' and 2) which 'class' a particular method belongs to. This could probably be accomplished with a larger set of conventions, such as having one test case "class" per file and assuming all "constructor" functions start with "new", but that seems unnecessarily restrictive to me. Though maybe I'm being too picky; I'd appreciate other people's opinions on whether it is too restrictive.
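A rough sketch of that discovery pass, assuming tests live in pkg:/source and follow the naming conventions above (the parsing here is deliberately naive):

function DiscoverTests() as Object
    found = []
    for each fileName in MatchFiles("pkg:/source", "test*.brs")
        src = ReadAsciiFile("pkg:/source/" + fileName)
        for each line in src.Tokenize(chr(10))
            line = line.Trim()
            ' Very rough parse: look for "function test..." or "sub test..."
            if LCase(Left(line, 13)) = "function test" or LCase(Left(line, 8)) = "sub test"
                paren = Instr(1, line, "(")
                space = Instr(1, line, " ")
                if paren > space and space > 0
                    found.Push(Mid(line, space + 1, paren - space - 1))
                end if
            end if
        end for
    end for
    return found
end function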

With the procedural approach there are issues on the implementation side. For instance, how to discriminate between assertion failures and test errors? With the OOP approach I can store an error message on the object, but with a purely procedural approach, where would/could I stick it? It seems like I'd be able to state that a test failed, but not why. Additionally, since there is no way to get some kind of 'stack trace object' rather than just the line number where the error occurred, it would prove difficult to determine which assert actually failed. With the error message in the OOP approach I can at least print the values that failed the assert. Here you could only tell that test X failed (as opposed to errored), so a lot of diligence would be required to ensure that there was only a single assert in each test; that is generally a good practice, but quite an impediment as a requirement.
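One possibility I'm turning over (purely hypothetical at this point): pass a plain context object into every procedural test, and have free-standing assert helpers write the message to it before forcing the runtime error. Illustrative names throughout:

function NewContext() as Object
    return { failureMessage: invalid }
end function

sub AssertTrue(ctx as Object, condition as Boolean, msg as String)
    if not condition
        ctx.failureMessage = msg
        i = 1 / 0   ' abort the test; the runner inspects ctx afterwards
    end if
end sub

' A test written against this convention:
sub testStringParsing(ctx as Object)
    AssertTrue(ctx, Len("abc") = 3, "Len should count characters")
end sub

That keeps the message somewhere the runner can find it, at the cost of threading ctx through every test.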

So this is where I'm currently at. Not a lot of answers yet, but I've made some decent progress, though I wouldn't be surprised if the end result looked nothing like what I have so far. Any comments/questions/suggestions would be greatly appreciated.

-Mark
MarkRoddy
Visitor

Re: BrightScript Testing

Great news: I settled on a design that works around a number of the issues I've had to deal with. After a few days of hacking away I now have a functional unit testing framework working, including a small battery of unit tests for the framework itself. I've got some code cleanup to do and then some docs to write on how to write tests in this framework (I had to take a few steps away from the traditional xUnit approach, but they're not too bad). I should have everything ready to post in the next day or two, assuming no major distractions.

-Mark
MarkRoddy
Visitor

Re: BrightScript Testing

"jdtangney" wrote:
"MarkRoddy" wrote:
Great news: I settled on a design that works around a number of the issues I've had to deal with. After a few days of hacking away I now have a functional unit testing framework working, including a small battery of unit tests for the framework itself. I've got some code cleanup to do and then some docs to write on how to write tests in this framework (I had to take a few steps away from the traditional xUnit approach, but they're not too bad). I should have everything ready to post in the next day or two, assuming no major distractions.

-Mark


Mark, this is great news! I am a brand new (minutes only) developer on this platform, and I was a little surprised to see that BrightScript is the only game in town. For a hard-core test-driven developer like me, some flavor of xUnit is a must. I am very grateful to you for explaining this all to the Roku developer community, and I want to encourage you to publish your code.

I shall now sit down and reread the whole thread, and offer feedback if I can.

--johnt


John,
Glad to hear there's some interest. I'm big into TDD myself so this was a must. I have an announcement thread I recently posted with some more details on the initial release:
http://forums.rokulabs.com/viewtopic.php?t=25960

-Mark
JamesChannel
Visitor

Re: BrightScript Testing

Hello, what program can I use to test a script function for errors? Thx
Komag
Roku Guru

Re: BrightScript Testing

You have to sideload it to an actual Roku and use the debugger, plus any other print info you set up.
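For example, a couple of print statements like these (illustrative code) will show up in the debug console, which you reach by telnetting to the box's IP on port 8085 while the sideloaded channel runs:

sub Main()
    print "entering Main()"
    item = { title: "First Feed Item" }   ' illustrative data
    print "loaded item: "; item.title
end sub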
JamesChannel
Visitor

Re: BrightScript Testing

Okay, that's where I telnet into the Roku, right, and read the screen?

Have you had a chance to look at viewtopic.php?f=34&t=94251 yet?