
Predicting ASG Funding Allocations 

EECS 349 Final Project
Nikit Bobba | Nisha Mallya
ABSTRACT

Our project aims to better understand the student group funding process undertaken by Associated Student Government (ASG). Within ASG, the B-Status Funding Committee is responsible for allocating University funds across over 50 student organizations each quarter. Our machine learning tasks aim to predict the percentage of requested funding a student group can expect to receive based on their application. We aim to increase transparency, help group leaders set realistic expectations about the amount of funding they can expect to receive, and help ASG better understand historical funding trends and maintain consistency.

 

In our dataset, the attributes include the funding categories most commonly requested by student groups, as well as the total funding pool available and the type of student group. We split our task into two stages. First, we framed a binary classification problem based on whether a student group received above 70% of its requested funding. We found that random forests gave the best results, producing an accuracy of 77% - roughly a 15% increase over our ZeroR baseline accuracy. Second, we attempted to predict the actual percentage of requested funding received by a group. Again, random forests performed best, with a correlation coefficient of 0.58 and an RMSE of 27.55.
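The sketch below approximates this two-stage setup in Python with scikit-learn. The file name, column names (e.g. pct_funded), threshold encoding, and hyperparameters are illustrative assumptions rather than our exact workflow.

# Minimal sketch of the two-stage setup; file and column names are assumptions.
import pandas as pd
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("asg_funding.csv")           # hypothetical dataset file
X = df.drop(columns=["pct_funded"])           # request categories, funding pool, group type
y_pct = df["pct_funded"]                      # percentage of requested funding received

# Stage 1: binary classification -- did the group receive above 70% of its request?
y_class = (y_pct > 70).astype(int)
zero_r = DummyClassifier(strategy="most_frequent")        # ZeroR-style baseline
rf_clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("ZeroR accuracy:", cross_val_score(zero_r, X, y_class, cv=10).mean())
print("Random forest accuracy:", cross_val_score(rf_clf, X, y_class, cv=10).mean())

# Stage 2: regression -- predict the actual percentage of requested funding received.
rf_reg = RandomForestRegressor(n_estimators=100, random_state=0)
rmse = -cross_val_score(rf_reg, X, y_pct, cv=10,
                        scoring="neg_root_mean_squared_error").mean()
print("Random forest RMSE:", rmse)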

 

Overall, we found that the percentage of requested funding a student group receives is largely driven by the absolute amount of funds requested. In addition, requests for categories such as marketing and venue tend to increase a group’s likelihood of being funded, whereas requests for categories such as food, transport, and supplies are more likely to be rejected. In the future, further analysis could incorporate additional features describing the group itself, such as the year the group was founded and the number of members; these attributes may prove informative for future predictions.
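One way to check which attributes drive the prediction is to inspect the impurity-based feature importances of a fitted random forest. The short snippet below is a sketch reusing the rf_reg, X, and y_pct names assumed in the earlier example.

# Sketch: rank attributes by importance, reusing rf_reg, X, y_pct from the sketch above.
rf_reg.fit(X, y_pct)
importances = pd.Series(rf_reg.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False))   # e.g. amount requested, marketing, venue, ...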

FINDINGS

We found that random forests obtained the best results: rather than relying on a single decision tree, which risks overfitting, they return the modal class prediction across many trees.
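To make the voting concrete, the sketch below collects each tree's prediction from a fitted forest and takes the mode, reusing the rf_clf, X, and y_class names assumed earlier. (scikit-learn's own predict averages class probabilities across trees rather than taking a strict majority vote, but the effect is similar.)

# Sketch: majority vote across individual trees, reusing rf_clf, X, y_class from above.
import numpy as np
from scipy.stats import mode

rf_clf.fit(X, y_class)
tree_votes = np.array([tree.predict(X.values) for tree in rf_clf.estimators_])
majority_class = mode(tree_votes, axis=0).mode.ravel()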

Testing Set Accuracies

FINAL REPORT