From: jk87377@lehtori.cc.tut.fi (Kouhia Juhana)
Subject: Re: More gray levels out of the screen
Organization: Tampere University of Technology
Lines: 21
Distribution: inet
NNTP-Posting-Host: cc.tut.fi
In article <1993Apr6.011605.909@cis.uab.edu> sloan@cis.uab.edu
(Kenneth Sloan) writes:
>
>Why didn't you create 8 grey-level images, and display them for
>1,2,4,8,16,32,64,128... time slices?
By '8 grey level images' do you mean 8 one-bit images?
It does work(!), but it doesn't work if you have more than 1 bit
per pixel on your screen and the screen intensity is non-linear.
With 2 bits per pixel there could be a 1*c_1 + 4*c_2 timing;
this gives 16 levels, but they are only linear if the screen
intensity is linear.
With 1*c_1 + 2*c_2 it works, but we have to find the best
combinations -- there are 10 levels but 16 choices, so the best 10
must be chosen. Different combinations for the same level vary a
bit, but the levels keep their order.
Readers should verify what I wrote... :-)
Juhana Kouhia
category: 0
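The post above is a sample document from the training set and 0 is its label. The code below refers to newsgroups_train, newsgroups_test and vocab without defining them in this section; here is a minimal sketch of how they could be built, assuming scikit-learn's fetch_20newsgroups and a Counter-based vocabulary over both splits (only those three names come from the code below, the rest is an assumption):

import numpy as np
import tensorflow as tf
from collections import Counter
from sklearn.datasets import fetch_20newsgroups

# The three categories named later: graphics, space and baseball
categories = ["comp.graphics", "sci.space", "rec.sport.baseball"]
newsgroups_train = fetch_20newsgroups(subset="train", categories=categories)
newsgroups_test = fetch_20newsgroups(subset="test", categories=categories)

# Count every (lowercased) word in both splits; the distinct keys
# form the vocabulary, so every test-set word also gets an index
vocab = Counter()
for text in newsgroups_train.data + newsgroups_test.data:
    for word in text.split(' '):
        vocab[word.lower()] += 1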
total_words = len(vocab)

def get_word_2_index(vocab):
    word2index = {}
    for i, word in enumerate(vocab):
        word2index[word.lower()] = i
    return word2index

word2index = get_word_2_index(vocab)

print("Index of the word 'the':", word2index['the'])
print("Each batch has 100 texts and each matrix has 119930 elements (words):",get_batch(newsgroups_train,1,100)[0].shape)
Each batch has 100 texts and each matrix has 119930 elements (words): (100, 119930)
print("Each batch has 100 labels and each matrix has 3 elements (3 categories):",get_batch(newsgroups_train,1,100)[1].shape)
Each batch has 100 labels and each matrix has 3 elements (3 categories): (100, 3)
# Parameters
learning_rate = 0.01
training_epochs = 10
batch_size = 150
display_step = 1

# Network Parameters
n_hidden_1 = 100          # 1st layer number of features
n_hidden_2 = 100          # 2nd layer number of features
n_input = total_words     # Words in vocab
n_classes = 3             # Categories: graphics, sci.space and baseball

input_tensor = tf.placeholder(tf.float32, [None, n_input], name="input")
output_tensor = tf.placeholder(tf.float32, [None, n_classes], name="output")
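The training loop below also uses prediction, loss, optimizer and init, none of which appear in this section. A plausible sketch, assuming the two-hidden-layer feed-forward network the parameters above describe (the random-normal initialization and the choice of Adam are assumptions; the rest follows the TF 1.x names used below):

def multilayer_perceptron(input_tensor, weights, biases):
    # Hidden layer 1: linear transform + ReLU
    layer_1 = tf.nn.relu(tf.add(tf.matmul(input_tensor, weights['h1']), biases['b1']))
    # Hidden layer 2: linear transform + ReLU
    layer_2 = tf.nn.relu(tf.add(tf.matmul(layer_1, weights['h2']), biases['b2']))
    # Output layer: raw logits, one per category
    return tf.add(tf.matmul(layer_2, weights['out']), biases['out'])

weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes])),
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_classes])),
}

prediction = multilayer_perceptron(input_tensor, weights, biases)

# Mean softmax cross-entropy over the batch, minimized with Adam
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=output_tensor))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)

init = tf.global_variables_initializer()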
# Launch the graph
with tf.Session() as sess:
    sess.run(init)

    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(len(newsgroups_train.data) / batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_x, batch_y = get_batch(newsgroups_train, i, batch_size)
            # Run optimization op (backprop) and cost op (to get loss value)
            c, _ = sess.run([loss, optimizer],
                            feed_dict={input_tensor: batch_x, output_tensor: batch_y})
            # Compute average loss
            avg_cost += c / total_batch
        # Display logs per epoch step
        if epoch % display_step == 0:
            print("Epoch:", '%04d' % (epoch + 1), "loss=", "{:.9f}".format(avg_cost))
    print("Optimization Finished!")

    # Test model
    correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(output_tensor, 1))
    # Calculate accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    total_test_data = len(newsgroups_test.target)
    batch_x_test, batch_y_test = get_batch(newsgroups_test, 0, total_test_data)
    print("Accuracy:", accuracy.eval({input_tensor: batch_x_test, output_tensor: batch_y_test}))