Agile@School 2017 – When the projects come true


Last time we had a great time. I'd have expected some trouble, some problems to manage; instead, everything went smoothly. This is why I'm attaching pictures of the results below: it's the best way to show how the students are shaping their ideas. Keep in mind that we're talking about 18-year-old students, not startups!

A picture is worth a thousand words.

Introducing the teams:

The Messinesi team (Amanda and Alex) is developing a real-time collaborative chat. Similar to the famous Slack, its purpose is to make the team's members more familiar with technologies used nowadays, like SignalR and the latest releases of the .NET Framework. The name of the project is Notify. The guys are also following an interesting course on Visual Studio, in order to be prepared to become real developers in the future. As you can see below, development is still in progress; for now it's mostly a matter of design. Due to the nature of the project itself, we need to wait for the next releases.

The Random team (Thomas and Luca) is presenting a natural-language bot, without any deep learning in this first release, which replies to a set of questions about famous Italian writers. It answers with links, information and texts about the author requested by a real user. The name of the project is Italian Authors and it's been made by integrating the Wit.ai APIs. It will eventually run on Facebook Messenger via the Heroku platform. Lots of technologies!
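For context, Wit.ai's message endpoint is a plain HTTPS GET with the user's question in the `q` parameter and the app's server token in a Bearer Authorization header. A minimal sketch of how such a bot might build the request; the token, the question, and the API version date are placeholders, and a real bot would send the request with an HTTP client:

```python
from urllib.parse import urlencode

WIT_MESSAGE_URL = "https://api.wit.ai/message"

def build_wit_request(question, server_token, api_version="20170601"):
    """Build the URL and headers for Wit.ai's /message endpoint.
    The user's question goes in the `q` query parameter and the
    app's server token in a Bearer Authorization header."""
    query = urlencode({"v": api_version, "q": question})
    url = f"{WIT_MESSAGE_URL}?{query}"
    headers = {"Authorization": f"Bearer {server_token}"}
    return url, headers

# A real bot would now send the request, e.g. with the requests library:
#   response = requests.get(url, headers=headers)
url, headers = build_wit_request("Chi ha scritto I Promessi Sposi?", "MY_SERVER_TOKEN")
```

Wit.ai replies with the detected intent and entities as JSON; the bot then looks up the matching author and posts the answer back to Messenger.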

The Scrubs team (Enea and Sebastiano) is working on a similar project, also based on a natural-language bot without any deep learning in this first release, which replies to a set of questions about two topics they care about: sports and photography. In this scenario too, a real user interacts with the chatbot. The name of the project is CPP and it's been made by integrating the Wit.ai APIs. It will eventually run on Facebook Messenger via Heroku.

 

The Domotic team (Nicodemo and Mattia), as the name of the team suggests, is building a real-time IoT application which interacts with a prototype of a "smart house". You can open and close doors and windows, and turn the lights on and off. In this first release you cannot clean the floor, but I guess we need more time for that feature… Actually, the guys could integrate a robot 🙂 . The name of the project is Future House and it's developed for Arduino, using PHP as well.

 

The Human Recognizers team (Marco and Francesco) is developing a face-recognition Android app, which analyses pictures of people in order to get information about age, mood and so on, also printing a string related to the mood itself. The name of the project is iFinder and it consumes the Android SDK and Microsoft's Face API. The result is impressive.
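The Face API returns face attributes such as age and per-emotion scores when you request them via the `returnFaceAttributes` parameter, with the subscription key in the `Ocp-Apim-Subscription-Key` header. A minimal sketch of how an app like iFinder might build the request and derive its mood string; the endpoint region, the key, and the label format are my assumptions, not taken from the app:

```python
from urllib.parse import urlencode

def build_detect_request(endpoint, subscription_key):
    """Build the URL and headers for a Face API /detect call that
    asks for age and emotion attributes. The raw image bytes would
    be sent as the request body."""
    query = urlencode({"returnFaceAttributes": "age,emotion"})
    url = f"{endpoint}/face/v1.0/detect?{query}"
    headers = {
        "Ocp-Apim-Subscription-Key": subscription_key,
        "Content-Type": "application/octet-stream",  # raw image bytes
    }
    return url, headers

def mood_string(emotion_scores):
    """Turn the emotion scores returned for a face into a printable
    mood label, like the string iFinder shows."""
    mood = max(emotion_scores, key=emotion_scores.get)
    return f"Detected mood: {mood}"
```

The service responds with a JSON array of detected faces; picking the highest-scoring emotion, as `mood_string` does, is one simple way to turn those scores into a label.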

The Bar Santa team (Simone and Mirko) is hacking a remote-controlled car. As a result, they've got a super car with an onboard camera, stepper motors for the wheels, and sensors, everything fitted in a dedicated chassis printed by the school's 3D printer. The name of the project is SuperCar (do you remember KITT?) and it's been built on a Raspberry Pi 3.

 

I don't want to bore you with the teams' retrospectives and ceremonies, but this time they worked perfectly. Each team started to discuss with us the pros and cons of their choices, the things to do and the ones to avoid in the future. They depicted everything using starfish diagrams (in Italian):

 

What can I say? AWESOME! This is how I’m feeling right now. Lots of ideas, technologies, integrations, and definitely FUN! I hope that everyone else agrees with me.

I guess that the exam will be a great show for those guys, too. The next post will be about the pitch videos they'll create for each project. I'm excited, but I have to bite my tongue: I've already seen something and I don't want to spoil anything, so…

Stay Tuned!

SQL Server Latest Updates (May. 2017)

Directly from the SQL Server Release Services blog, here are the latest updates for SQL Server 2012 SP3 and 2016:

Cumulative Update #9 for SQL Server 2012 SP3

Cumulative Update #6 for SQL Server 2016 RTM

Cumulative Update #3 for SQL Server 2016 SP1

Also, you can download the Microsoft Azure Database Management Pack (6.7.28.0) here.

Stay Tuned! 🙂

SQL Server Latest Updates (Apr. 2017)

Directly from the SQL Server Release Services blog, here are the latest updates for SQL Server 2014:

Cumulative Update #12 for SQL Server 2014 SP1

Cumulative Update #5 for SQL Server 2014 SP2

and also an interesting link on SQL Server Native Client, which explains the SNAC lifecycle in more detail.

Stay Tuned! 🙂

Agile@School 2017 – obstacles on the path

Hello everyone,

As mentioned in the previous post, the project has started and we've reached the "fourth episode". This time, Alessandro and I were able to talk with the students in order to understand which kind of project they are going to complete and show during the final exam.

Like in every other project, problems are around the corner. Indeed, the students didn't create any tasks under the related Product Backlog Items, which is what they should have done. The issues were principally:

  • some teams still hadn't figured out any idea to implement
  • some others weren't able to use Visual Studio Team Services (VSTS) in the right way (or a few of them simply didn't want to 🙂 )

That's why our last meeting focused on explaining the advantages of agile methodologies over the classic waterfall approach, and on showing them how to use VSTS correctly in order to clarify any doubt about its use. We started to speak about methodologies because they were waterfalling their project: they were gathering all the ideas up front, instead of thinking in an iterative way.

Despite this, we could see the first results from the majority of the students, which makes us confident about the future of their projects.

As a result of this talk, the students seemed to have understood why to choose one methodology over another, and how to manage their work with the right tools. At the end of the day, we assigned them just a simple piece of homework: create the tasks which reflect their development steps, moving the PBIs through the different statuses during their work.

That's all for this episode. Stay tuned for news about the progress of the project.

See you in the next post!

Learn How a Large SQL Server Transaction Log Affects Performance

Whenever a user adds, deletes, or edits a record in SQL Server, the change is recorded in the transaction log. A background process keeps writing every transaction to the log and to the database; after that, the transaction log records are marked as written. A database in SQL Server uses a recovery model, typically Simple or Full. If the database uses the full recovery model, all written transactions are kept in the log, so it becomes mandatory to manage the transaction log file and back it up on a daily basis; under the full recovery model, transaction log files can grow quite large. Transaction logs in SQL Server auto-grow by default, and it is advisable not to turn the auto-growth feature off, because it is helpful in emergencies.

More About the SQL Server Transaction Log Size

The transaction log of a SQL Server database is made up of one or more physical files, and SQL Server writes to one physical transaction log file at a time. Internally, each physical log file is divided into structures known as Virtual Log Files (VLFs). The number and size of the VLFs inside a transaction log file depend on a number of factors, and are determined at the time the transaction log file is created or extended.
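For a sense of scale, the number of VLFs that each creation or growth event adds follows a simple size-based rule. This is the behaviour documented for versions up to SQL Server 2012; SQL Server 2014 refined the algorithm, so treat the numbers below as an approximation rather than an exact count:

```python
def vlfs_created(growth_mb):
    """VLFs added by a single create/growth event, following the rule
    documented for SQL Server up to 2012 (2014 changed the algorithm):
    under 64 MB -> 4 VLFs, 64 MB up to 1 GB -> 8, 1 GB and over -> 16."""
    if growth_mb < 64:
        return 4
    if growth_mb < 1024:
        return 8
    return 16

def total_vlfs(initial_mb, target_mb, growth_mb):
    """Rough VLF count for a log created at initial_mb and then grown
    to target_mb in fixed increments of growth_mb."""
    count = vlfs_created(initial_mb)
    size = initial_mb
    while size < target_mb:
        count += vlfs_created(growth_mb)
        size += growth_mb
    return count
```

This is why tiny growth increments are harmful: growing a 1 MB log to 1 GB in 1 MB steps produces thousands of VLFs, while creating the log at 1 GB in one shot produces only 16.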

Effect of Large Transaction Log in SQL Server Performance

A SQL Server transaction log file is composed of small parts, known as virtual log files, whose size is not fixed. The goal is to keep the number of virtual log files per transaction log file small, because SQL Server manages a smaller number of files more easily.

  • A huge number of virtual log files usually has one of two causes: a small transaction log that has grown (manually or automatically) in very small increments, or a configuration problem where large growth increments were intended but very small ones were actually set. If the number of VLFs grows unnecessarily large through auto-growth, the log becomes fragmented, which may introduce delays and also slows down the recovery process; this is why having too many (or too few) virtual log files results in bad performance.
  • The auto-growth option is turned on by default. If the auto-growth settings are not handled appropriately, the database will be forced to auto-grow, which may lead to serious performance issues, because SQL Server halts all processing until the auto-grow event completes. Moreover, the newly allocated space on disk is often not physically close to the space previously occupied by the transaction log file; this results in physical fragmentation of the files, which also causes slower response times.
  • It is always advisable to back up the transaction log regularly. If the backup process fails, old transaction log records are not removed, so the log grows at a rapid rate and you are left with an over-sized transaction log file. A database with a large transaction log suffers in several ways:
  1. When the transaction log file fills up, overall SQL Server performance degrades.
  2. The transaction log backup process slows down.
  3. Over-sized transaction logs also waste disk space, because the old transaction log records have not been removed.

How to Resolve the Over-sized Transaction Log Problem?

The number of VLFs is increased by every auto-grow event; this is a normal process, but it requires strict rules to prevent unplanned space issues or unresponsiveness. Therefore, to reduce the impact of over-sized transaction log files on SQL Server performance, the issue needs to be resolved.

The most common solution is to reduce the number of virtual log files in the transaction log. To do so, follow the two simple steps below:

  1. Back up the transaction log.
  2. Then shrink the transaction log file.

Shrinking the SQL Server transaction log file is what actually reduces the number of virtual log files; it also requires strict rules, to avoid deleting data that has not been backed up yet.
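In T-SQL, the two steps correspond to a `BACKUP LOG` followed by a `DBCC SHRINKFILE`. The sketch below only generates such a script; the database, log file, and path names are placeholders, not taken from any real environment:

```python
def log_maintenance_script(db, log_file, backup_path, target_mb=128):
    """Build the T-SQL for the two steps above: back up the
    transaction log, then shrink the physical log file.
    All object names here are placeholders."""
    return "\n".join([
        # Step 1: back up the log so inactive VLFs can be cleared.
        f"BACKUP LOG [{db}] TO DISK = N'{backup_path}';",
        # Step 2: shrink the log file down to target_mb megabytes.
        f"USE [{db}];",
        f"DBCC SHRINKFILE (N'{log_file}', {target_mb});",
    ])

print(log_maintenance_script("SalesDb", "SalesDb_log", "D:/backup/SalesDb.trn"))
```

After shrinking, it is common to grow the log back to its working size in one large increment, so that it is rebuilt with few, reasonably sized VLFs.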

Conclusion

The transaction log is the most important log file in any SQL Server database, and it keeps growing because the auto-growth option is turned on by default. Internally, the transaction log file is made up of many virtual log files; an excessive number of VLFs has a bad effect on SQL Server performance. Therefore, it is necessary to properly control the auto-growth of the transaction log files.

SQL Server Latest Updates (Mar. 2017)

Directly from the SQL Server Release Services blog, here are the latest updates for SQL Server 2016:

Cumulative Update #5 for SQL Server 2016 RTM

Cumulative Update #2 for SQL Server 2016 SP1

and then, again, a new public preview of the Azure SQL Database Management Pack!

Stay Tuned! 🙂

Agile@School 2017 – let’s start over


As a recurring project, Agile@School started again in February, with a new set of projects and ideas. Gabriele will help me again, but it will be a very difficult task. During the past year we followed a Scrum approach, in order to suit the team structure: as you can read here, there was one team with a small bunch of members. Now we're getting "bigger". As a result, we'll have micro-teams of two or three members each. A great chance for Kanban. Let's give it a try.


How will we approach it in the beginning?

  • defining a set of micro-teams, which we call "task forces"
  • designing a Kanban board
  • describing personas
  • speaking of some ceremonies we’d like to get rid of
  • speaking of some ceremonies we’ll keep
  • describing the customer journey and the story map practices

The task forces

The term doesn't fit very well, actually; indeed, a task force is something that could be considered a "defcon 1" team. However, we wanted to give the teams a "strong" label. To be honest, we have a small amount of time, so in the end we can say that we're already in a hurry 🙂

The Kanban board

As we said above, we will have several task forces, most likely six. Therefore, the board will use columns (as usual) for status management and rows (aka swimlanes) to separate teams and projects.


The board will be created in Visual Studio Team Services, in order to also use the Source Control Manager it provides.
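A board with status columns and one swimlane per task force can be sketched as a tiny data structure. This is only an illustration: the three status columns are my assumption, and the six team names are the task forces that appear elsewhere in this archive:

```python
COLUMNS = ["To Do", "In Progress", "Done"]  # assumed status columns
TEAMS = ["Messinesi", "Random", "Scrubs",
         "Domotic", "Human Recognizers", "Bar Santa"]

def new_board():
    """An empty board: each swimlane (team) gets one list per column."""
    return {team: {col: [] for col in COLUMNS} for team in TEAMS}

def move(board, team, item, src, dst):
    """Move a work item between status columns inside a team's swimlane."""
    board[team][src].remove(item)
    board[team][dst].append(item)

# Example: one team pulls a work item into progress.
board = new_board()
board["Scrubs"]["To Do"].append("define chatbot intents")
move(board, "Scrubs", "define chatbot intents", "To Do", "In Progress")
```

The swimlane-per-team layout keeps six independent projects visible on a single board, which is exactly what we want from Kanban here.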

Personas

Each team member will populate a simple card, the Persona card, which is depicted in the picture below:

(picture: the Persona card)

As you can see (it's in Italian), the first column is for the Persona's details, the second for their interests, and the third for the "role" the member would like to have. I know that the last column is not included in any best practice, but I feel that some students could start to think about their job and their future. It could be interesting.

The customer journey

During the next meeting, we'll ask the students to show us their customer journeys. Each team will have to describe the journey of a typical user, with the mood for each action the user takes and the value they get from the action itself.

Conclusions

Kanban, task forces, boards, customer journeys, personas, etc. This year is full of new things to learn. The source control manager will change too: we will use Git on VSTS, so we will get all the different projects in the same place in a quicker way.

And now, let’s start over! 🙂