Aquí Hay Trabajo

A company experienced in assisting people seeks national franchisees (international in the near future) to offer its services to families, seniors, and children, resolving any unforeseen events in our daily routine: health, school, travel, home, etc.

Sunday, September 27, 2020

Domain Authority 50 for your website - Guaranteed Service

We'll get your website to have Domain Authority 50 or we'll refund you every cent.

For only 150 USD, you'll have DA50 for your website, guaranteed.

Order it today:
http://www.str8-creative.co/product/moz-da-seo-plan/

thanks
Alex Peters

Wednesday, September 23, 2020

Tech Book Face Off: Data Smart Vs. Python Machine Learning

After reading a few books on data science and a little bit about machine learning, I felt it was time to round out my studies in these subjects with a couple more books. I was hoping to get some more exposure to implementing different machine learning algorithms as well as diving deeper into how to effectively use the different Python tools for machine learning, and these two books seemed to fit the bill. The first book with the upside-down face, Data Smart: Using Data Science to Transform Data Into Insight by John W. Foreman, looked like it would fulfill the former goal and do it all in Excel, oddly enough. The second book with the right side-up face, Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow by Sebastian Raschka and Vahid Mirjalili, promised to address the second goal. Let's see how these two books complement each other and move the reader toward a better understanding of machine learning.

Data Smart front cover vs. Python Machine Learning front cover

Data Smart

I must admit, I was somewhat hesitant to get this book. I was worried that presenting everything in Excel would be a bit too simple to really learn much about data science, but I needn't have been concerned. This book was an excellent read for multiple reasons, not least of which is that Foreman is a highly entertaining writer. His witty quips about everything from middle school dances to Target predicting teen pregnancies were a great motivator to keep me reading along, and more than once I caught myself chuckling out loud at an unexpectedly absurd reference.

It was refreshing to read a book about data science that didn't take itself seriously and added a bit of levity to an otherwise dry (interesting, but dry) subject. Even though it was lighthearted, the book was not a joke. It had an intensity to the material that was surprising given the medium through which it was presented. Spreadsheets turned out to be a great way to show how these algorithms are built up, and you can look through the columns and rows to see how each step of each calculation is performed. Conditional formatting helps guide understanding by highlighting outliers and important contrasts in the rows of data. Excel may not be the best choice for crunching hundreds of thousands of entries in an industrial-scale model, but for learning how those models actually work, I'm convinced that it was a worthy choice.

The book starts out with a little introduction that describes what you got yourself into and justifies the choice of Excel for those of us that were a bit leery. The first chapter gives a quick tour of the important parts of Excel that are going to be used throughout the book—a skim-worthy chapter. The first real chapter jumps into explaining how to build up a k-means cluster model for the highly critical task of grouping people on a middle school dance floor. Like most of the rest of the chapters, this one starts out easy, but ramps up the difficulty so that by the end we're clustering subscribers for email marketing with a dozen or so dimensions to the data.
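
For anyone who wants to see the same idea outside of a spreadsheet, here is a minimal Python sketch of k-means clustering using scikit-learn. This is my own illustration rather than the book's Excel build-up, and the "dance floor" coordinates are made up:

    import numpy as np
    from sklearn.cluster import KMeans

    # Toy "dance floor" data: each row is a person, the columns are made-up
    # x/y positions in the gym, generated as three loose cliques.
    rng = np.random.default_rng(42)
    people = np.vstack([
        rng.normal(loc=(2, 2), scale=0.5, size=(20, 2)),
        rng.normal(loc=(8, 3), scale=0.5, size=(20, 2)),
        rng.normal(loc=(5, 8), scale=0.5, size=(20, 2)),
    ])

    # Fit k-means with k=3; the cluster centers play the same role as the
    # cluster-center rows the book builds up column by column in Excel.
    model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(people)
    print("cluster centers:\n", model.cluster_centers_)
    print("first five assignments:", model.labels_[:5])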

Chapter 3 switches gears from an unsupervised to a supervised learning model with naïve Bayes for classifying tweets about Mandrill the product vs. the animal vs. the Mega Man X character. Here we can see how irreverent, but on-point Foreman is with his explanations:
Because naïve Bayes is often called "idiot's Bayes." As you'll see, you get to make lots of sloppy, idiotic assumptions about your data, and it still works! It's like the splatter-paint of AI models, and because it's so simple and easy to implement (it can be done in 50 lines of code), companies use it all the time for simple classification jobs.
Every chapter is like this and better. You never know what Foreman's going to say next, but you quickly expect it to be entertaining. Case in point, the next chapter is on optimization modeling using an example of, what else, commercial-scale orange juice mixing. It's just wild; you can't make this stuff up. Well, Foreman can make it up, it seems. The examples weren't just whimsical and funny, they were solid examples that built up throughout the chapter to show multiple levels of complexity for each model. I was constantly impressed with the instructional value of these examples, and how working through them really helped in understanding what to look for to improve the model and how to make it work.
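
The book does its blending chapter with Excel's Solver; for comparison, here is roughly how a completely invented juice-blending problem could be set up as a linear program in Python with scipy (the costs and sweetness numbers are made up for illustration):

    from scipy.optimize import linprog

    # Invented blend: pick gallons x0, x1 of two juice sources to minimize cost,
    # subject to producing at least 1000 gallons with an average sweetness
    # (Brix) of at least 12. linprog wants "<=" constraints, so the ">=" rows
    # are written with their signs flipped.
    cost = [0.50, 0.80]                      # dollars per gallon of each source
    A_ub = [[-1, -1],                        # -(x0 + x1) <= -1000
            [-10, -14]]                      # -(10*x0 + 14*x1) <= -12*1000
    b_ub = [-1000, -12000]
    result = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print("gallons of each source:", result.x)   # roughly 500 of each here
    print("total cost:", round(result.fun, 2))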

After optimization came another dive into cluster analysis, but this time using network graphs to analyze wholesale wine purchasing data. This model was new to me, and a fascinating way to use graphs to figure out closely related nodes. The next chapter moved on to regression, both linear and non-linear varieties, and this happens to be the Target-pregnancy example. It was super interesting to see how to conform the purchasing data to a linear model and then run the regression on it to analyze the data. Foreman also had some good advice tucked away in this chapter on data vs. models:
You get more bang for your buck spending your time on selecting good data and features than models. For example, in the problem I outlined in this chapter, you'd be better served testing out possible new features like "customer ceased to buy lunch meat for fear of listeriosis" and making sure your training data was perfect than you would be testing out a neural net on your old training data.

Why? Because the phrase "garbage in, garbage out" has never been more applicable to any field than AI. No AI model is a miracle worker; it can't take terrible data and magically know how to use that data. So do your AI model a favor and give it the best and most creative features you can find.
As I've learned in the other data science books, so much of data analysis is about cleaning and munging the data. Running the model(s) doesn't take much time at all.
We're into chapter 7 now with ensemble models. This technique takes a bunch of simple, crappy models and improves their performance by putting them to a vote. The same pregnancy data was used from the last chapter, but with this different modeling approach, it's a new example. The next chapter introduces forecasting models by attempting to forecast sales for a new business in sword-smithing. This example was exceptionally good at showing the build-up from a simple exponential smoothing model to a trend-corrected model and then to a seasonally-corrected cyclic model all for forecasting sword sales.
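
To make that build-up concrete, here is a tiny Python sketch of the first rung of that ladder, simple exponential smoothing. The sword-sales numbers are invented, and the book layers trend and seasonality on top of this same idea in Excel:

    import numpy as np

    def simple_exponential_smoothing(demand, alpha=0.3):
        """One-step-ahead forecasts: blend each new observation with the old level."""
        level = demand[0]
        forecasts = [level]
        for observed in demand[1:]:
            level = alpha * observed + (1 - alpha) * level
            forecasts.append(level)
        return np.array(forecasts)

    # Made-up monthly sword sales for a new smithy.
    sales = np.array([23, 25, 31, 28, 35, 40, 38, 45, 50, 48])
    print(simple_exponential_smoothing(sales).round(1))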

The next chapter was on detecting outliers. In this case, the outliers were exceptionally good or exceptionally bad call center employees even though the bad employees didn't fall below any individual firing thresholds on their performance ratings. It was another excellent example to cap off a whole series of very well thought out and well executed examples. There was one more chapter on how to do some of these models in R, but I skipped it. I'm not interested in R, since I would just use Python, and this chapter seemed out of place with all the spreadsheet work in the rest of the book.
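
The book approaches this with its own spreadsheet-friendly techniques; as a rough modern stand-in rather than the book's method, here is a scikit-learn LocalOutlierFactor sketch on invented call-center ratings, where one employee is mediocre on every metric without crossing any single threshold:

    import numpy as np
    from sklearn.neighbors import LocalOutlierFactor

    # Invented per-employee performance ratings: rows are employees, columns
    # are individual metrics. Employee 0 is deliberately "meh" across the
    # board - no single score is terrible, but the combination is unusual.
    rng = np.random.default_rng(1)
    ratings = rng.normal(loc=7.5, scale=0.6, size=(50, 4))
    ratings[0] = [6.2, 6.1, 6.3, 6.2]

    lof = LocalOutlierFactor(n_neighbors=10)
    labels = lof.fit_predict(ratings)        # -1 marks points flagged as outliers
    print("employees flagged as outliers:", np.where(labels == -1)[0])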

What else can I say? This book was awesome. Every example of every model was deep, involved, and appropriate for learning the ins and outs of that particular model. The writing was funny and engaging, and it was clear that Foreman put a ton of thought and energy into this book. I highly recommend it to anyone wanting to learn the inner workings of some of the standard data science models.

Python Machine Learning

This is a fairly long book, certainly longer than most books I've read recently, and a pretty thorough and detailed introduction to machine learning with Python. It's a melding of a couple other good books I've read, containing quite a few machine learning algorithms that are built up from scratch in Python a la Data Science from Scratch, and showing how to use the same algorithms with scikit-learn and TensorFlow a la the Python Data Science Handbook. The text is methodical and deliberate, describing each algorithm clearly and carefully, and giving precise explanations for how each algorithm is designed and what its trade-offs and shortcomings are.

As long as you're comfortable with linear algebraic notation, this book is a straightforward read. It's not exactly easy, but it never takes off into the stratosphere with the difficulty level. The authors also assume you already know Python, so they don't waste any time on the language, instead packing the book completely full of machine learning stuff. The shorter first chapter still does the introductory tour of what machine learning is and how to install the correct Python environment and libraries that will be used in the rest of the book. The next chapter kicks us off with our first algorithm, showing how to implement a perceptron classifier as a mathematical model, as Python code, and then using scikit-learn. This basic sequence is followed for most of the algorithms in the book, and it works well to smooth out the reader's understanding of each one. Model performance characteristics, training insights, and decisions about when to use the model are highlighted throughout the chapter.
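
To give a flavor of that sequence, here is a condensed from-scratch perceptron in the spirit of that chapter (my own compressed version, not the book's exact code), trained on a tiny made-up dataset:

    import numpy as np

    class Perceptron:
        """Bare-bones perceptron: weights are nudged whenever a point is misclassified."""

        def __init__(self, eta=0.1, n_iter=10):
            self.eta = eta          # learning rate
            self.n_iter = n_iter    # passes over the training set

        def fit(self, X, y):
            self.w_ = np.zeros(X.shape[1])
            self.b_ = 0.0
            for _ in range(self.n_iter):
                for xi, target in zip(X, y):
                    update = self.eta * (target - self.predict(xi))
                    self.w_ += update * xi
                    self.b_ += update
            return self

        def predict(self, X):
            return np.where(np.dot(X, self.w_) + self.b_ >= 0, 1, -1)

    # Tiny linearly separable toy set with labels in {-1, 1}.
    X = np.array([[2.0, 1.0], [3.0, 4.0], [-1.0, -2.0], [-3.0, -1.0]])
    y = np.array([1, 1, -1, -1])
    print(Perceptron().fit(X, y).predict(X))   # should reproduce y

The library version of the same step is scikit-learn's sklearn.linear_model.Perceptron, which is the kind of swap the book makes after the from-scratch pass.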

Chapter 3 delves deeper into perceptrons by looking at different decision functions that can be used for the output of the perceptron model, and how they could be used for more things beyond just labeling each input with a specific class as described here:
In fact, there are many applications where we are not only interested in the predicted class labels, but where the estimation of the class-membership probability is particularly useful (the output of the sigmoid function prior to applying the threshold function). Logistic regression is used in weather forecasting, for example, not only to predict if it will rain on a particular day but also to report the chance of rain. Similarly, logistic regression can be used to predict the chance that a patient has a particular disease given certain symptoms, which is why logistic regression enjoys great popularity in the field of medicine.
The sigmoid function is a fundamental tool in machine learning, and it comes up again and again in the book. Midway through the chapter, they introduce three new algorithms: support vector machines (SVM), decision trees, and K-nearest neighbors. This is the first chapter where we see an odd organization of topics. It seems like the first part of the chapter really belonged with chapter 2, but including it here instead probably balanced chapter length better. Chapter length was quite even throughout the book, and there were several cases like this where topics were spliced and diced between chapters. It didn't hurt the flow much on a complete read-through, but it would likely make going back and finding things more difficult.
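
The class-membership probability from the quote above maps directly onto scikit-learn's predict_proba; here is a minimal sketch with an invented rain/humidity dataset:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Toy one-feature problem: the chance of rain rises with humidity.
    humidity = np.array([[20], [35], [50], [60], [70], [80], [90], [95]])
    rained = np.array([0, 0, 0, 0, 1, 1, 1, 1])

    clf = LogisticRegression().fit(humidity, rained)
    # predict() applies the 0.5 threshold; predict_proba() exposes the sigmoid
    # output before thresholding, which is the "chance of rain" from the quote.
    print("hard labels:", clf.predict([[55], [85]]))
    print("chance of rain:", clf.predict_proba([[55], [85]])[:, 1].round(2))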

The next chapter switches gears and looks at how to generate good training sets with data preprocessing, and how to train a model effectively without overfitting using regularization. Regularization is a way to systematically penalize the model for assigning large weights that would lead to memorizing the training data during training. Another way to avoid overfitting is to use ensemble learning with a model like random forests, which are introduced in this chapter as well. The following chapter looks at how to do dimensionality reduction, both unsupervised with principal component analysis (PCA) and supervised with linear discriminant analysis (LDA).
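
As a quick illustration of the unsupervised half of that chapter, here is a minimal PCA sketch with scikit-learn on synthetic data that really only has two underlying directions of variation:

    import numpy as np
    from sklearn.decomposition import PCA

    # Synthetic data: 200 samples, 10 features, but the variance comes from
    # 2 latent directions plus a little noise.
    rng = np.random.default_rng(7)
    latent = rng.normal(size=(200, 2))
    mixing = rng.normal(size=(2, 10))
    X = latent @ mixing + rng.normal(scale=0.1, size=(200, 10))

    pca = PCA(n_components=2).fit(X)
    print("variance explained by 2 components:",
          pca.explained_variance_ratio_.sum().round(3))
    X_reduced = pca.transform(X)              # 200 x 2 matrix for downstream models
    print("reduced shape:", X_reduced.shape)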

Chapter 6 comes back to how to train your dragon…I mean model…by tuning the hyperparameters of the model. The hyperparameters are just the settings of the model, like what its decision function is or how fast its learning rate is. It's important during this tuning that you don't pick hyperparameters that are just best at identifying the test set, as the authors explain:
A better way of using the holdout method for model selection is to separate the data into three parts: a training set, a validation set, and a test set. The training set is used to fit the different models, and the performance on the validation set is then used for the model selection. The advantage of having a test set that the model hasn't seen before during the training and model selection steps is that we can obtain a less biased estimate of its ability to generalize to new data.
It seems odd that a separate test set isn't enough, but it's true. Training a machine isn't as simple as it looks. Anyway, the next chapter circles back to ensemble learning with a more detailed look at bagging and boosting. (Machine learning has such creative names for things, doesn't it?) I'll leave the explanations to the book and get on with the review, so the next chapter works through an extended example application to do sentiment analysis of IMDb movie reviews. It's kind of a neat trick, and it uses everything we've learned so far together in one model instead of piecemeal with little stub examples. Chapter 9 continues the example with a little web application for submitting new reviews to the model we trained in the previous chapter. The trained model will predict whether the submitted review is positive or negative. This chapter felt a bit out of place, but it was fine for showing how to use a model in a (semi-)real application.
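
The three-way split described in the quote above is easy to express by calling scikit-learn's train_test_split twice; here is a minimal sketch on invented data (a common pattern rather than the book's exact code):

    import numpy as np
    from sklearn.model_selection import train_test_split

    # Invented dataset: 1000 samples, 20 features, binary labels.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20))
    y = rng.integers(0, 2, size=1000)

    # Carve off a test set that is never touched while tuning, then split the
    # remainder into training and validation sets.
    X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)

    print(len(X_train), len(X_val), len(X_test))   # 600 / 200 / 200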

Chapter 10 covers regression analysis in more depth with single and multiple linear and nonlinear regression. Some of this stuff has been seen in previous chapters, and indeed, the cross-referencing starts to get a bit annoying at this point. Every single time a topic comes up that's covered somewhere else, it gets a reference with the full section name attached. I'm not sure how I feel about this in general. It's nice to be reminded of things that you've read about hundreds of pages back and I've read books that are more confusing for not having done enough of this linking, but it does get tedious when the immediately preceding sections are referenced repeatedly. The next chapter is similar with a deeper look at unsupervised clustering algorithms. The new k-means algorithm is introduced, but it's compared against algorithms covered in chapter 3. This chapter also covers how we can decide if the number of clusters chosen is appropriate for the data, something that's not so easy for high-dimensional data.
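
One common sanity check for the number of clusters is the elbow method: fit k-means for a range of k values and watch where the within-cluster sum of squares stops dropping sharply. Here is a small sketch on synthetic blobs (illustrative, not taken from the book):

    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs

    # Synthetic data with 4 well-separated blobs.
    X, _ = make_blobs(n_samples=300, centers=4, cluster_std=0.8, random_state=0)

    # Inertia falls quickly until k reaches the "true" cluster count, then
    # levels off - that bend is the elbow.
    for k in range(1, 8):
        inertia = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
        print(f"k={k}: inertia={inertia:.1f}")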

Now that we're two-thirds of the way through the book, we come to the elephant in the machine learning room, the multilayer artificial neural network. These networks are built up from perceptrons with various activation functions:
However, logistic activation functions can be problematic if we have highly negative input since the output of the sigmoid function would be close to zero in this case. If the sigmoid function returns output that are close to zero, the neural network would learn very slowly and it becomes more likely that it gets trapped in the local minima during training. This is why people often prefer a hyperbolic tangent as an activation function in hidden layers.
And they're trained with various types of back-propagation. Chapter 12 shows how to implement neural networks from scratch, and chapter 13 shows how to do it with TensorFlow, where the network can end up running on the graphics card supercomputer inside your PC. Since TensorFlow is a complex beast, chapter 14 gets into the nitty gritty details of what all the pieces of code do for implementation of the handwritten digit identifier we saw in the last chapter. This is all very cool stuff, and after learning a bit about how to do the CUDA programming that's behind this library with CUDA by Example, I have a decent appreciation for what Google has done with making it as flexible, performant, and user-friendly as they can. It's not simple by any means, but it's as complex as it needs to be. Probably.
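
The activation-function point in the quote above is easy to see numerically; here is a tiny, purely illustrative numpy comparison of how the sigmoid squashes strongly negative inputs toward 0 while tanh keeps them near -1 and zero-centered:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # For strongly negative net inputs the sigmoid's output collapses toward 0
    # (the situation the quote warns about), while tanh still returns values
    # near -1, keeping hidden-layer activations zero-centered.
    z = np.array([-10.0, -5.0, -1.0, 0.0, 1.0, 5.0])
    print("z:      ", z)
    print("sigmoid:", sigmoid(z).round(4))
    print("tanh:   ", np.tanh(z).round(4))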

The last two chapters look at two more types of neural networks: the deep convolutional neural network (CNN) and the recurrent neural network (RNN). The CNN does the same hand-written digit classification as before, but of course does it better. The RNN is a network that's used for sequential and time-series data, and in this case, it was used in two examples. The first example was another implementation of the sentiment analyzer for IMDb movie reviews, and it ended up performing similarly to the regression classifier that we used back in chapter 8. The second example was for how to train an RNN with Shakespeare's Hamlet to generate similar text. It sounds cool, but frankly, it was pretty disappointing for the last example of the most complicated network in a machine learning book. It generated mostly garbage and was just a let-down at the end of the book.

Even though this book had a few issues, like tedious code duplication and explanations in places, the annoying cross-referencing, and the out-of-place chapter 9, it was a solid book on machine learning. I got a ton out of going through the implementations of each of the machine learning algorithms, and wherever the topics started to stray into more in-depth material, the authors provided references to the papers and textbooks that contained the necessary details. Python Machine Learning is a solid introductory text on the fundamental machine learning algorithms, covering how they work mathematically, how they're implemented in Python, and how to use them with scikit-learn and TensorFlow.


Of these two books, Data Smart is a definite read if you're at all interested in data science. It does a great job of showing how the basic data analysis algorithms work using the surprisingly effective method of laying out all of the calculations in spreadsheets, and doing it with good humor. Python Machine Learning is also worth a look if you want to delve into machine learning models, see how they would be implemented in Python, and learn how to use those same models effectively with scikit-learn and TensorFlow. It may not be the best book on the topic, but it's a solid entry and covers quite a lot of material thoroughly. I was happy with how it rounded out my knowledge of machine learning.

Tuesday, September 22, 2020

Bimonthly Progress Report For My Twitch Channel, FuzzyJCats, March 2 To July 1

Twitch Channel FuzzyJCats

After a hiatus due to my friends being in the hospital, when I came back to streaming, it finally dawned on me to not care about viewer numbers - at last! Even though I theoretically realized that ignoring numbers is a major solution to preventing burnout, there was always a part of me that cared, due to ego issues.

As a result, I was watching a lot of Twitch streams, since that's the best way to grow numbers (i.e., networking and making friends), to the point that it caused migraines - not to mention a feeling of being imprisoned because I "have" to watch streams for numbers.

In other words, I was afraid to limit my Twitch viewing, as it might lead to a decrease in numbers. However, during the period when I was visiting and supporting my friends, I didn't have time to watch Twitch, and despite this being a very sad period, I felt physically better due to the lack of migraines.

When I came back to streaming, I had such a huge outpouring of support that I finally internalized deep down that it's my viewers who are the most important, not reaping numbers. Because of my viewers (and it can never be overstated, your viewers make your stream), I was able to internalize this completely. 

Indeed, one of the most deadly things you can do as a streamer is to take your audience for granted. Having all this support and love from my community makes streaming worth it, and that certainly prevents burnout. I must never forget how I felt when my community was there for me when I came back to streaming.

I'm not sure if my viewers noticed that I was more spontaneous and free during this time, but I felt like a burden was lifted from my shoulders. Indeed, you can ad lib more when you don't have to worry about turning off viewers. Although I enjoyed streaming before this revelation, my joy was constricted by worrying about concurrent viewers. Now, without my joy being choked off, I feel liberated while streaming.

Interestingly, since I stopped being concerned about numbers, I noticed that I was able to stream just as well as ever. In fact, I may have been better at streaming, since I had the same (or perhaps better) mental focus despite not exercising. Before almost all of my past streams, I made sure to exercise. I think having this psychological freedom made streaming less taxing. When you're truly happy in what you're doing, in other words, you're able to be more effective.

It took almost a year to get to a place where I finally know and feel that numbers are irrelevant, and that's a breath of fresh air. In the meantime, I've come up with strats (a flowchart that I follow) that help guide my stream:

If there's someone commenting, stop everything and talk to the person (though remember what you were talking about before that and continue that thought process).

Once talking to the person, if there's a pause in chat, go back to previous thought and complete the thought.

If there's no one there, comment on gaming action (why you did this, what you're going to do, how you feel about the cinematic cut scenes and the like), or tell interesting stories (more on that later).

Use load screens to catch up on chat and talk.

It gets rather stale talking about the same points (i.e., gaming action and streaming issues), so to improve content, discussing life experiences is key, as you can fill dead air by telling stories. Talking about life experiences is material enough.

Further, these experiences don't have to be unique and exciting ones - often any common mishaps that you experience can be told very humorously. Making people laugh is one of the best forms of entertainment.

I never had reason to be a storyteller, so being an entertaining storyteller is a skill that I'll be working on. This is an entirely new and exciting adventure that I'll be experiencing!

Progress made:
  • Truly not caring about concurrent viewer numbers (finally!).
  • Realizing that storytelling can make streams more compelling.
  • Putting in scheduled vacations, and notifying the community, to prevent burnout.
Improvements to be made:
  • Be a better story teller.
  • Thank new viewers for stopping by the stream (use cbenni.com chat log to review chat history) - I was consistent in the past, but not currently.
  • Completing my thought processes and sentences (I have a tendency to leave them unfinished IRL as well).
  • Get back into exercising and self-care.
  • The usual being able to chat and game at the same time (this is not a habit yet).
  • The usual decreasing filler words, vocal "tics" and the like.

Sunday, September 13, 2020

Two Types Of Game Stores

Hobby game stores are the tip of the iceberg. They were once the whole iceberg, introducing new customers, catering to veteran customers, and acting as taste makers. They did it all. The store owner decided which game you would play, and publishers would do their best to place ads in magazines or show things at conventions to convince customers otherwise. The stores were never powerful, but they were strong influencers and with little competition, they grew lazy. Epically so.

Right now, hobby game stores are as numerous and prosperous as they've ever been. However, the hobby has grown so huge in the last decade, and the Internet such a powerful force, they struggle to remain relevant. I struggle to just keep up with customer demands, and only occasionally flex my muscles as taste maker.

This is not to say brick and mortar stores are dying or having problems (which is their natural state); it just means they're trying to find their position in the changing marketplace, where Amazon has steadily gobbled up game trade market share and now owns, what, 80%? Who really knows. Many game stores are selling on Amazon with an "if you can't beat them, join them" strategy. So stores struggle with how to approach this perilous new world, where the Internet dominates as a sales channel, with Amazon and direct-to-consumer sales being the primary means of commerce. It's such a powerful force that it not only drives customers to us, but they arrive with a different idea of how the games are played.

There are two primary strategies to stay relevant as a hobby game store: serve the lowest common denominator or serve the highest common denominator. When I say highest, I refer to the intense amount of retail work required to bring in new customers, expose them to a broad variety of games, and later watch them wander off to Internet sales once educated. It's game store ownership as parenting. It's time consuming, expensive, and only works because nobody big is dumb enough to try. It's the full spectrum, high capitalization approach.

Deciding on being the highest common denominator requires a serious capital budget, strong sales training, and a local market where this is possible. Most scorched earth regions, characterized by close-to-free real estate and a customer base trained to pick apart newcomers, need not apply. There is a strength to this model, but there's also the eternal question of, if you have enough money to do this right, why would you do it at all? When I mention the scorched earth issue with scorched earth store owners, they have no idea what I'm talking about. Scorched earth is the game trade in many regions. It would be like asking a convention of ice cream store owners to consider a world without refrigeration. Sucky stores exist to serve a sucky market.

The lowest common denominator is serving the most profitable customers right now. It's a supremely logical business model, unlike the high store. You identify the lowest hanging fruit, the maximum value for the least effort, and you serve that. You serve it all the time in every way possible. You don't invest in fancy fixtures or worry too much about Kickstarter or Dungeons & Dragons table acreage. Every D&D table of players is worth one Magic player, and you make no bones about it. You serve the beast that feeds you. I should mention a good LCD store is just as well capitalized and the owners just as smart and clever as the HCD store. They just satisfy different needs in the marketplace.

The lowest common denominator store serves Magic to Magic players in every Magic configuration imaginable. You have events for every format, you sell tons of singles and have a war chest of cash reserves for buying cards from customers that would make a marijuana dispensary nervous. Where the high road store spent a small fortune on fixtures, trained staff and diverse inventory, the low road store has a shockingly large collection of used cardboard. That other expensive stuff? It's just not necessary. That means lots of singles sold in store and online and deep discount pricing on sealed product because you're essentially selling a commodity item, like soy beans. If you could buy stock in Lifetime Products, Inc., you would. You are not concerned with margin, only the market price.

Both models work. However, imagine if you were trying to grow your market as a publisher. Do you want the image of where your game is played to be that of a dirty den of dudes or a professional enterprise that welcomes all new people? Do you want to be associated with a pawn shop or Neiman Marcus? You created the marketplace where the dirty dude model worked best, but you no longer need them to sell things, just act as an onramp to your hobby game. Your own child is a delinquent and now that they've grown up, you're tired of them hanging out at your house, eating your food.

The game trade is headed in a direction that rewards the highest common denominator store because publishers are primarily interested in image, not sales volume, from this increasingly insignificant sales channel. The ability of a store to sell lots of a product is literally none of a publisher's business, other than knowing people come to buy it there. Supporting stores is just a marketing expense now, not a requirement for economic survival, and nobody wants to spend money on representing a poor image. It does not mean the high stores will get any sort of real sales benefit, any guarantee of meat on the bone, but when there are bones thrown, they'll get them first.

We are at the point where there is a push to transform the lowest common denominator stores into something more presentable, while rewarding highest common denominator stores with perks to help showcase publisher brands in these locations. Again, sales are irrelevant other than as a marketing indicator. Is it financially feasible to transform your store? Even a very good store might spend thousands of dollars to attain what's considered great, but will it result in stronger sales? Not necessarily, and although that might be the store owner's goal, it's not the goal of the publisher.

Will customers appreciate the change? It turns out the answer is sometimes. The stores that catered to the hardcore Magic crowd most effectively are not usually the stores being rewarded in this new paradigm. Some hardcore customers, catered to by the lowest common denominator stores, are angry and resentful that these "Magic light" stores are getting bones. Sure, the casual players at the high road stores enjoy tablecloths and shiny trash cans, but they're not buying more because of it.

There are two points I want to make about this mismatch between hardcore players and high road stores. First, when someone is truly angry about a business, it means they need it. They want it one way, but it's the other. When a grognardy Magic player is resentful that a product or event is being held by the high road store, that's a sign that store's strategy is working. They are needed, and it rankles the mercenary customer. This was once reserved for pre-releases, where I would see the once-a-quarter customer scowl at me for existing. How dare you offer something exclusive I need, you sellout.

Second, if you're playing a game from a publisher who doesn't seem to align with your interests, maybe it's because you no longer align with theirs. Maybe your mercenary nature means you'll find your way in the marketplace regardless and you no longer need to be served to such a high degree. Perhaps you've graduated. Perhaps the penalizing of the low road store and reward of the high road is a signal to the customer base that it's time to grow up.


UCLan’s cJAM Media Event, Friday 22 November

The games design course was excited to take part in cJAM: Media last week!
The event enables our talented students to meet face-to-face with senior industry professionals to share ideas, make connections, and pitch for opportunities.
cJAM events are hosted by the Faculty of Culture and the Creative Industries and the objective is to give our students the opportunity to win placements that will help launch their careers.

The day included:
FREE breakfast and lunch

Giant speed pitching session

Chance to win industry placements

Industry guest speakers

Industry Q&A panel

Networking throughout.

We were so proud to welcome our alumna, Saija Wintersun, now Senior Environment Artist at Rebellion, Oxford.
Saija spent much of the day reviewing student portfolios and offering her expert advice.

The Creative Innovation Zone in UCLan's Media Factory was buzzing with conversation as hundreds of students queued for 'speed dating' style interviews with their industry heroes and mentors.

See details of the programme HERE.

Tania Callagher, UCLan Resources Co-ordinator, and Richard Albiston, Creative Producer of The Great Northern Creative Expo, must be given the utmost credit for arranging this inspiring and exuberant event, which led to 88 placements being awarded to Media students.


Friday, September 4, 2020

Scrum In Review: How Did Legion Do?

My last post was about going with Legion out of the three factions I have available, and I went into the last Scrum of 2019 in South Jersey to give them a run.



Not So Minor Complications

Legion was one of the main factions I played in MK2 and I hadn't really played or bought much for them in MK3. Most notably I didn't own what is considered the staple of competitive Legion lists in the current meta: Chosen, Rotwings, and a ton of Incubi to play Kallus1.

I did, however, have a friend who owned a ton of Incubi and a second unit of Grotesque Raiders that I wanted to test out, and since he was playing Minions, I was able to borrow some models for the Scrum.

That all said, the lists I ended up making were largely based on overthinking the meta while I was taking a vacation, and it became immediately apparent when I started the Scrum that some mistakes were made in list construction:

Kallus1 - Ravens of War
- Succubus
- Ravagore
- Ravagore
- Golab
- Naga
2x Grotesque Raiders
2x Grotesque Assassins
2x Hellmouths
Forsaken
Deathstalker
Sorceress and Hellion

Fyanna 2 - Oracles of Annihilation
- Succubus
- Naga
- Scythean
- Seraph
- Neraph
- Neraph
2x Shepherds
Sorceress and Hellion
Incubus
Forsaken
Full Hex Hunters + Bayal
Throne of Everblight

Vayl1 - Primal Terrors
- Ammok
- Blight Bringer
- Raek
Warmonger War Chief
Full Warmongers + Gorag
2x Full Warspears + Chieftain
2x Hellmouths


How did I do?

I started playing again at the beginning of August after a few months off the game; the Scrum started in September and ended this week. After it all, I went 3-2, though one game was a concession, so while I technically have a winning record, it's not completely based on my skill.

My only wins came from using Vayl1, of all the lists, and in both games I basically stole the win: I used a Blight Bringer shot plus Vayl to get two boosted spells into their caster, and in both cases I got the assassination. In the first game, vs. Morvana1, I was able to get boosted blast damage and two spells into her due to a slight misplay by my opponent. In my last game, vs. Gearheart, I was enabled by luckily landing a crit stationary off Hoarfrost and then rolling well on my boosted damage rolls.

In both wins my army got utterly demolished on attrition the turn before, and I was able to just pull the win out.

I lost round one in a challenge to my friend who lent me the Legion models. It was Fyanna2 vs. Maelock with 4 units of Posse and two Wrastlers. I basically needed to win the dice roll to go first and try to jam him out of scenario, otherwise none of my lists had the hitting power to get through his feat turn.  I lost the roll and the game spiraled out of my control, though honestly I was on my back foot the entire game.

Second round I played into Iona + Tharn for the first time ever with Kallus. I actually was able to hold the game off a bit due to some key dice rolls going against my opponent and then I had a massive attrition swing - only to have Iona come from downtown to get some Wolf Riders and Deathwolves into her feat range who could then get onto my caster.  Really fun game even though I lost it.

Overall Thoughts

Once I started playing without having a big PT backup list, things felt kind of rough. Having a Kallus1 list with Chosen would have really helped vs. the Gators, and I felt a bit down on my lists; as I looked into each matchup, it became apparent I was going to be dropping a sub-optimal Vayl1 list, which ended up working out due to her always having pocket assassinations. I'm sure there are decent non-Chosen/Rotwing based lists in Legion to play, I just didn't make them for this event, and work needs to be done to find them.

I really enjoyed playing in the Scrum again, seeing old faces, meeting new people, and getting to push models around just made me happy in general. I'm definitely looking forward to playing in next year's Scrums when I can.

Going Forward

October to December is a very busy time of year for me and my family due to birthdays, anniversaries, and holidays - so my gaming time is going to be limited and far more casual. There's no way I'm getting out to any tournaments, though I'm hoping to get casual games in at least every other week and hopefully doing an Oblivion Campaign play through.

Later this month the Void Archon comes out which completely changes how Convergence as a faction is going to be played, and their CID is just around the corner, so I'm excited to play them again.

That said, a local player I just met for the first time during the Scrum was selling his Legion and gave me a great deal on all the key models I didn't own: Chosen, Rotwings, and Incubi, so I actually have a lot more tools to make more competitive Legion lists going forward as well. There's a lot I want to explore, and it doesn't necessarily involve just playing the near ubiquitous Kallus1 Primal Terrors list.
