Overview

1-o-flappy

A micro-game "flappy bird".

Gameplay

The game is installed at /usr/bin under the name "1-o-flappy"; type "1-o-flappy" to start it. When the game starts, the "bird" falls automatically. Press Enter to make the "bird" go up. When the "bird" falls to the ground or touches any roll, the game is over and "Game over!" is shown. Press Ctrl-C to pause during a game. You can never win this game.
The title bar shows the date and time, the elapsed game time (in seconds), the vertical speed, the horizontal speed, the passable range of the rolls, and the score.
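
For illustration only, here is a minimal sketch of the loop described above. It is not the game's actual source: it assumes a Unix terminal where pressing Enter delivers a line on stdin, and high, roll_range, rolls and score are illustrative names.

import select
import sys
import time

high = 15                      # current height of the "bird"
roll_range = 3                 # passable gap in each roll
rolls = {}                     # column index -> bottom row of the passable gap
score = 0

try:
    while True:
        # Non-blocking check: has the player pressed Enter since the last tick?
        if select.select([sys.stdin], [], [], 0)[0]:
            sys.stdin.readline()
            high += 1          # Enter lifts the "bird"
        else:
            high -= 1          # otherwise it keeps falling

        gap = rolls.get(0)     # the "bird" stays in the leftmost column here
        hit_roll = gap is not None and not (gap <= high < gap + roll_range)
        if high <= 0 or hit_roll:
            print("Game over!")
            break

        # The rolls scroll one column to the left on every tick.
        rolls = {col - 1: g for col, g in rolls.items() if col > 0}
        score += 1
        time.sleep(1 / 20)     # the real game derives this delay from speed_v
except KeyboardInterrupt:
    pass                       # Ctrl-C pauses a running game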

Config

When a game starts, it loads a config file named ".1-o-flappy" in your home directory. If that file does not exist, the game creates it. The default content of this file is shown below.
The file must define the following keys for the game to run normally (a sketch of loading such a file follows this list):

  • roll_skin -> a str
  • p_skin -> a str
  • blank_skin -> a str
  • v_border -> a str
  • h_border -> a str
  • width -> an int (> 0)
  • height -> an int (> 0)
  • speed_v -> an int or a float (> 0), or a str (eval(str) > 0)
  • speed_h -> an int or a float (> 0), or a str (eval(str) > 0)
  • high -> an int (> 0)
  • roll_range -> an int (> 0), or a str (type(eval(str)) == int and eval(str) > 0)
  • cooldown_tile -> an int (>= 0), or a str (type(eval(str)) == int and eval(str) >= 0)
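
For illustration, here is a minimal sketch of how such a file could be loaded and checked. It is not the game's actual loader: load_config and REQUIRED are hypothetical names, and it assumes the file is evaluated as the Python-style assignments shown in the default content below.

import os

REQUIRED = {
    "roll_skin", "p_skin", "blank_skin", "v_border", "h_border",
    "width", "height", "speed_v", "speed_h", "high",
    "roll_range", "cooldown_tile",
}

def load_config(path=os.path.expanduser("~/.1-o-flappy")):
    config = {}
    with open(path) as f:
        exec(f.read(), {}, config)   # "#" comments and blank lines are harmless
    missing = REQUIRED - config.keys()
    if missing:
        raise KeyError("config is missing: " + ", ".join(sorted(missing)))
    return config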

roll_skin means the skin of the rolls (the default is "|").
p_skin means the skin of the "bird" (the default is "O").
blank_skin means the skin of the blank area (the default is " ").
v_border means the skin of the vertical border on the right (the default is "").
h_border means the skin of the horizontal border on the bottom (the default is "-").
You can color a skin with an ANSI escape sequence (e.g. "\033[42m").
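For example (an assumption about the escape codes, not taken from the game's documentation), a roll drawn on a green background could be configured as:

roll_skin = "\033[42m|\033[0m"

Appending "\033[0m" resets the color so it does not bleed into the neighbouring cells.
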
The default content of the config file:

# The config file of Flappy.

# Skin config start
roll_skin = "|"
p_skin = "O"
blank_skin = " "
v_border = ""
h_border = "-"
# Skin config end

# Gameplay config start
width = 79
height = 18
speed_v = 20
speed_h = 4
roll_range = 3
cooldown_tile = 16
high = 15
# Gameplay config end

width means the visible width of the checkerboard (the default is 79).
height means the visible height of the checkerboard (the default is 18).
speed_v means the vertical (right-to-left) moving speed of the "bird" (the default is 20); it is also the refresh speed.
speed_h means the horizontal moving speed of the "bird" (the default is 4).
roll_range means the passable range of the rolls (the default is 3).
cooldown_tile means the minimum separation between two rolls (the default is 16).
high means the initial height of the "bird" (the default is 15).
speed_v, speed_h, roll_range and cooldown_tile can also be a str. If one of them is a str, the game reads it as eval(str). Make sure the evaluated value is positive, and that the evaluated value of roll_range or cooldown_tile is an int; otherwise an error will occur. A sketch of how such a value could be checked follows the list of usable values below.
Some usable values:

"(time.time()-start_time)" # the value of the time that game has run(Unit: s)
"(score)" # the score of the game
"(high)" # the height of the "bird"
"(time.time())" # the time that be generated by time.time
"(sum(checkerboard))" # the sum of the height of all the rolls

FAQ

Q: Why is this game named "1-o-flappy"?
A: The real name of this game is "1-O-Flappy"; "1-o-flappy" is the name of the deb package. "1" refers to "|", the default skin of the rolls, and "O" is the default skin of the "bird". The "1-O-" prefix helps the game avoid name conflicts.
Q: I pressed Enter, but why doesn't the "bird" go up?
A: You probably did not press Enter frequently enough, or your keyboard repeat rate is set too low.
Q: Why does the game look like a contorted space after I modify the skins in the config file?
A: That is one of the core features of this game, so enjoy it πŸ˜„
Q: Why is this game installed at /usr/bin and not /usr/games?
A: So that it is treated equally with other executable files.
