% MT class assignment 4

\documentclass[a4paper]{article}

\usepackage[english]{babel}
\usepackage[utf8x]{inputenc}
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage[colorinlistoftodos]{todonotes}

\textheight 8.5in \topmargin -0.5in

\title{Evaluation}
\author{Adam Poliak (apoliak1) \and David Russell (drusse19)}

\begin{document} \maketitle

\section{METEOR}

\subsection{METEOR Implementation}

As instructed in the assignment, we implemented METEOR in the script \texttt{./evaluation}, computing precision and recall as described in the assignment handout. The implementation is contained in the method \texttt{meteor}.
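A minimal sketch of such a \texttt{meteor} method is shown below. The clipped unigram-overlap matching and the exact placement of $\alpha$ in the weighted harmonic mean are assumptions about the class's simplified metric, not a transcription of our script.

```python
from collections import Counter

def meteor(hypothesis, reference, alpha=0.5):
    """Simplified METEOR: weighted harmonic mean of unigram precision
    and recall against a single reference string (sketch; the role of
    alpha here is an assumption)."""
    h, r = hypothesis.split(), reference.split()
    # Clipped unigram matches: multiset intersection of the two token bags.
    matches = sum((Counter(h) & Counter(r)).values())
    if matches == 0:
        return 0.0
    precision = matches / len(h)
    recall = matches / len(r)
    # Weighted harmonic mean; alpha shifts weight between P and R.
    return precision * recall / ((1 - alpha) * precision + alpha * recall)
```

For instance, a hypothesis identical to its reference scores $1.0$ for any $\alpha$.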

\subsection{Running METEOR}

To run our implementation of METEOR, call \texttt{./evaluation} with an $\alpha$ parameter: pass the \textbf{-a} flag followed by a decimal number representing $\alpha$. If the \textbf{-a} flag is not provided, the provided baseline implementation in \texttt{./evaluation} is run instead.
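The flag handling described above might be parsed as in the following sketch; the option name \textbf{-a} comes from the assignment, but the parser details are illustrative, not our actual code.

```python
import argparse

# Hypothetical sketch of the command-line interface described above:
# -a supplies alpha; when it is omitted, the baseline scorer runs instead.
parser = argparse.ArgumentParser(description="METEOR evaluation")
parser.add_argument("-a", "--alpha", type=float, default=None,
                    help="alpha for METEOR; omit to run the baseline")

args = parser.parse_args(["-a", "0.87"])  # simulates: ./evaluation -a 0.87
use_baseline = args.alpha is None
```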

\subsection{$\alpha$ tuning}

After running \texttt{./evaluation} $100$ times, we determined that the best $\alpha$ on the dev data is $\alpha = 0.87$, which resulted in an accuracy of $0.5045$.

The graph below shows the results from tuning $\alpha$: the x-axis gives the $\alpha$ values used in METEOR and the y-axis the corresponding accuracy.

\includegraphics[width=\textwidth]{alpha_tuning.jpg}

The data in the graph was generated by running the script \texttt{test\textunderscore alpha}, which sweeps over $\alpha$ values to determine the best one. The script takes about $5$ minutes to run.
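The sweep performed by \texttt{test\textunderscore alpha} can be sketched as follows; the step count and the \texttt{accuracy\textunderscore at} callback are assumptions for illustration, standing in for a full run of \texttt{./evaluation} at each $\alpha$.

```python
def tune_alpha(accuracy_at, steps=100):
    """Grid-search alpha on [0, 1].

    accuracy_at: callable mapping an alpha value to dev-set accuracy
    (hypothetical stand-in for scoring the dev data with ./evaluation).
    Returns the best (alpha, accuracy) pair found on the grid.
    """
    best_alpha, best_acc = 0.0, float("-inf")
    for i in range(steps + 1):
        alpha = i / steps  # evenly spaced grid points on [0, 1]
        acc = accuracy_at(alpha)
        if acc > best_acc:
            best_alpha, best_acc = alpha, acc
    return best_alpha, best_acc
```

With a $0.01$ grid step, a dev-accuracy curve peaking near $0.87$ yields that value as the selected $\alpha$.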

\section{WordNet Synonyms}

\end{document}